qid | question | date | metadata | response_j | response_k
---|---|---|---|---|---
62,673,500 | I have a table like this. a, b, c, d, e are the columns of the table.
[![enter image description here](https://i.stack.imgur.com/K7nA3.png)](https://i.stack.imgur.com/K7nA3.png)
I want to find distinct records grouped by the combination (d, e) and do some operations on the table.
The final table should remove duplicate keys.
The final table should look like below:
[![enter image description here](https://i.stack.imgur.com/5vyO2.png)](https://i.stack.imgur.com/5vyO2.png)
I have written a query like this:
```
SELECT *
FROM (SELECT a+"cis" as a_1,
b+"cis1" as b_1,
c as c_1,
d+"cis2" as d_1,
             e as e_1,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
FROM table1
) x
WHERE cnt = 1;
```
I am getting results like
[![enter image description here](https://i.stack.imgur.com/mlAZ7.png)](https://i.stack.imgur.com/mlAZ7.png)
How can I get the expected result?
Thanks in advance. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12464504/"
] | I think I found a solution:
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(x.d, 'cis2') as d_1, x.e as e_1
FROM (SELECT a, b, c, d, e,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
      FROM table) x
WHERE cnt = 1
``` | You can try the following approach:
```
SELECT concat(a,'cis') as a_1,
concat(b,'cis1') as b_1,
c as c_1,
concat(d,'cis2') as d_1,
max(e) as e_1
FROM table1
group by concat(a,'cis'),concat(b,'cis1'),c,concat(d,'cis2')
``` |
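The GROUP BY deduplication shown in the answer above can be checked end-to-end with a small script. The sketch below is only an illustration using Python's built-in `sqlite3` with made-up sample rows (the data is an assumption, not from the question); note that SQLite concatenates strings with `||` rather than `concat()`, and this version deduplicates on the (d, e) pair, taking `min()` of the remaining columns.

```python
import sqlite3

# Hypothetical sample data: columns a..e, with a duplicate (d, e) pair.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (a TEXT, b TEXT, c TEXT, d TEXT, e TEXT)")
rows = [
    ("a1", "b1", "c1", "d1", "e1"),
    ("a2", "b2", "c2", "d1", "e1"),  # duplicate (d, e) key
    ("a3", "b3", "c3", "d2", "e2"),
]
conn.executemany("INSERT INTO table1 VALUES (?, ?, ?, ?, ?)", rows)

# One output row per (d, e) pair; SQLite uses || for string concatenation.
dedup = conn.execute("""
    SELECT min(a) || 'cis'  AS a_1,
           min(b) || 'cis1' AS b_1,
           min(c)           AS c_1,
           d || 'cis2'      AS d_1,
           e                AS e_1
    FROM table1
    GROUP BY d, e
    ORDER BY d_1
""").fetchall()
print(dedup)  # two rows: the duplicate (d1, e1) pair has collapsed into one
```

Grouping directly on the key columns avoids the `ROW_NUMBER()` subquery entirely, which is essentially what the `GROUP BY` answer does.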
62,673,500 | I have a table like this. a, b, c, d, e are the columns of the table.
[![enter image description here](https://i.stack.imgur.com/K7nA3.png)](https://i.stack.imgur.com/K7nA3.png)
I want to find distinct records grouped by the combination (d, e) and do some operations on the table.
The final table should remove duplicate keys.
The final table should look like below:
[![enter image description here](https://i.stack.imgur.com/5vyO2.png)](https://i.stack.imgur.com/5vyO2.png)
I have written a query like this:
```
SELECT *
FROM (SELECT a+"cis" as a_1,
b+"cis1" as b_1,
c as c_1,
d+"cis2" as d_1,
             e as e_1,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
FROM table1
) x
WHERE cnt = 1;
```
I am getting results like
[![enter image description here](https://i.stack.imgur.com/mlAZ7.png)](https://i.stack.imgur.com/mlAZ7.png)
How can I get the expected result?
Thanks in advance. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673500",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12464504/"
] | I think I found a solution:
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(x.d, 'cis2') as d_1, x.e as e_1
FROM (SELECT a, b, c, d, e,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
      FROM table) x
WHERE cnt = 1
``` | I don't see why you are using `row_number()`. How about this?
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(MIN(x.d), 'cis2') as d_1, x.e as e_1
FROM t x
GROUP BY x.a, x.b, x.c, x.e
```
You should also be able to use `||` for string concatenation:
```
SELECT (x.a || 'cis') as a_1, (x.b || 'cis1') as b_1, x.c as c_1,
       (MIN(x.d) || 'cis2') as d_1, x.e as e_1
FROM t x
GROUP BY x.a, x.b, x.c, x.e;
``` |
62,673,504 | I have a pair of coordinates (lat, long).
I need to generate an image displaying these coordinates on a map,
and then generate such images for other coordinates in the future without Internet access.
Please tell me whether there are solutions that allow displaying coordinates offline.
Update: is there any way to download maps for offline use, e.g. GPS tracker maps or something like that?
Thank you. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7136335/"
] | I think I found a solution:
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(x.d, 'cis2') as d_1, x.e as e_1
FROM (SELECT a, b, c, d, e,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
      FROM table) x
WHERE cnt = 1
``` | You can try the following approach:
```
SELECT concat(a,'cis') as a_1,
concat(b,'cis1') as b_1,
c as c_1,
concat(d,'cis2') as d_1,
max(e) as e_1
FROM table1
group by concat(a,'cis'),concat(b,'cis1'),c,concat(d,'cis2')
``` |
62,673,504 | I have a pair of coordinates (lat, long).
I need to generate an image displaying these coordinates on a map,
and then generate such images for other coordinates in the future without Internet access.
Please tell me whether there are solutions that allow displaying coordinates offline.
Update: is there any way to download maps for offline use, e.g. GPS tracker maps or something like that?
Thank you. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673504",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7136335/"
] | I think I found a solution:
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(x.d, 'cis2') as d_1, x.e as e_1
FROM (SELECT a, b, c, d, e,
             ROW_NUMBER() OVER (PARTITION BY d, e ORDER BY d, e) as cnt
      FROM table) x
WHERE cnt = 1
``` | I don't see why you are using `row_number()`. How about this?
```
SELECT concat(x.a, 'cis') as a_1, concat(x.b, 'cis1') as b_1, x.c as c_1,
       concat(MIN(x.d), 'cis2') as d_1, x.e as e_1
FROM t x
GROUP BY x.a, x.b, x.c, x.e
```
You should also be able to use `||` for string concatenation:
```
SELECT (x.a || 'cis') as a_1, (x.b || 'cis1') as b_1, x.c as c_1,
       (MIN(x.d) || 'cis2') as d_1, x.e as e_1
FROM t x
GROUP BY x.a, x.b, x.c, x.e;
``` |
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | You'd do it by setting a new `angles` object, like this if you want to completely replace it:
```
this.setState({angles: {0: 90}});
```
or like this if you want to preserve any other properties and just replace the `0` property:
```
// Callback form (often best)
this.setState(({angles}) => ({angles: {...angles, 0: 90}}));
```
or
```
// Using the current state (often okay, but not if there may be other state updates pending)
this.setState({angles: {...this.state.angles, 0: 90}});
```
---
In a comment you've asked:
>
> Actually I need to pass 0 and 90 as variables. For instance consider 0 as one variable and 90 as one variable. Then in that case How can i do that?
>
>
>
In the above where I have `0: 90` you can use computed property notation: `[propertyname]: propertyvalue` where `propertyname` is the variable containing the property name and `propertyvalue` is the variable containing the property value. For instance, here's that last example with those variables:
```
this.setState({angles: {...this.state.angles, [propertyname]: propertyvalue}});
``` | You can write something like this:
```
this.setState((currentState) => ({
angles: {
...currentState.angles,
0: 90,
}
}));
```
Be aware that using numbers as keys in objects is not recommended.
If both 0 and 90 are values, then `angles` should be an array containing pairs of values.
Example:
```
angles: [[0,90], [60,45], [0, 45]]
```
To do this within your state, you would do something like this:
```
// initial state:
this.state = {
angles: [],
}
// add a value:
this.setState(({angles}) => ({
angles: angles.concat([[0,90]])
}))
```
Note the double-array syntax in `concat`: it is necessary. Without it you would end up with a flat one-dimensional array. |
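The spread-merge pattern used in the answers above, `{...angles, [key]: value}`, has a close analogue in Python's dict unpacking, which makes the immutable-update semantics easy to check outside React. This is only an illustration of the merge idea with made-up values, not React code.

```python
# Build a new mapping instead of mutating the old one, mirroring
# React's {...angles, [key]: value} with a computed property name.
angles = {1: 45}      # pretend this is the existing state
key, value = 0, 90    # both supplied as variables, as the question requires

new_angles = {**angles, key: value}

print(angles)      # the original mapping is untouched
print(new_angles)  # holds both the old entry and the newly added one
```

As in React, the important point is that the original object is never mutated; a fresh object is built that carries over the old entries and adds (or overrides) the computed key.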
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | You can write something like this:
```
this.setState((currentState) => ({
angles: {
...currentState.angles,
0: 90,
}
}));
```
Be aware that using numbers as keys in objects is not recommended.
If both 0 and 90 are values, then `angles` should be an array containing pairs of values.
Example:
```
angles: [[0,90], [60,45], [0, 45]]
```
To do this within your state, you would do something like this:
```
// initial state:
this.state = {
angles: [],
}
// add a value:
this.setState(({angles}) => ({
angles: angles.concat([[0,90]])
}))
```
Note the double-array syntax in `concat`: it is necessary. Without it you would end up with a flat one-dimensional array. | I think this is the simplest way to do it:
```
this.setState({angles: {0: 90}});
``` |
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | You can write something like this:
```
this.setState((currentState) => ({
angles: {
...currentState.angles,
0: 90,
}
}));
```
Be aware that using numbers as keys in objects is not recommended.
If both 0 and 90 are values, then `angles` should be an array containing pairs of values.
Example:
```
angles: [[0,90], [60,45], [0, 45]]
```
To do this within your state, you would do something like this:
```
// initial state:
this.state = {
angles: [],
}
// add a value:
this.setState(({angles}) => ({
angles: angles.concat([[0,90]])
}))
```
Note the double-array syntax in `concat`: it is necessary. Without it you would end up with a flat one-dimensional array. | You probably want something like this:
```js
const state = {
angles: {}
};
const setThisToState = { 0: 90 };
const key = Object.keys(setThisToState).toString();
const value = Object.values(setThisToState).toString();
state.angles = { [key]: value }; // state is now { angles: { '0': '90' } }
```
**EDIT:**
Since you asked for having 0 and 90 in variables, I wanted to point out the computed property notation.
```
class YourClass extends React.Component {
constructor(props) {
super(props);
this.state = {
angles: {}
}
}
  componentDidMount() {
// using object destructuring here
const { angles } = this.state;
// say you are fetching data from an API here
fetch(fromAPI)
.then(response => response.json())
.then(data => { // say data.coordinates = { 0: 90 }
const key = Object.keys(data.coordinates).toString();
const value = Object.values(data.coordinates).toString();
// this overrides current state.angles. To keep values of state.angles use
// the spread operator as mentioned below
this.setState({ angles: { [key]: value }})
// spread out current state.angles if you wanna keep the values + add a
// "new object"
        this.setState({ angles: { ...angles, [key]: value } })
})
}
}
``` |
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | You'd do it by setting a new `angles` object, like this if you want to completely replace it:
```
this.setState({angles: {0: 90}});
```
or like this if you want to preserve any other properties and just replace the `0` property:
```
// Callback form (often best)
this.setState(({angles}) => ({angles: {...angles, 0: 90}}));
```
or
```
// Using the current state (often okay, but not if there may be other state updates pending)
this.setState({angles: {...this.state.angles, 0: 90}});
```
---
In a comment you've asked:
>
> Actually I need to pass 0 and 90 as variables. For instance consider 0 as one variable and 90 as one variable. Then in that case How can i do that?
>
>
>
In the above where I have `0: 90` you can use computed property notation: `[propertyname]: propertyvalue` where `propertyname` is the variable containing the property name and `propertyvalue` is the variable containing the property value. For instance, here's that last example with those variables:
```
this.setState({angles: {...this.state.angles, [propertyname]: propertyvalue}});
``` | I think this is the simplest way to do it:
```
this.setState({angles: {0: 90}});
``` |
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | You'd do it by setting a new `angles` object, like this if you want to completely replace it:
```
this.setState({angles: {0: 90}});
```
or like this if you want to preserve any other properties and just replace the `0` property:
```
// Callback form (often best)
this.setState(({angles}) => ({angles: {...angles, 0: 90}}));
```
or
```
// Using the current state (often okay, but not if there may be other state updates pending)
this.setState({angles: {...this.state.angles, 0: 90}});
```
---
In a comment you've asked:
>
> Actually I need to pass 0 and 90 as variables. For instance consider 0 as one variable and 90 as one variable. Then in that case How can i do that?
>
>
>
In the above where I have `0: 90` you can use computed property notation: `[propertyname]: propertyvalue` where `propertyname` is the variable containing the property name and `propertyvalue` is the variable containing the property value. For instance, here's that last example with those variables:
```
this.setState({angles: {...this.state.angles, [propertyname]: propertyvalue}});
``` | You probably want something like this:
```js
const state = {
angles: {}
};
const setThisToState = { 0: 90 };
const key = Object.keys(setThisToState).toString();
const value = Object.values(setThisToState).toString();
state.angles = { [key]: value }; // state is now { angles: { '0': '90' } }
```
**EDIT:**
Since you asked for having 0 and 90 in variables, I wanted to point out the computed property notation.
```
class YourClass extends React.Component {
constructor(props) {
super(props);
this.state = {
angles: {}
}
}
  componentDidMount() {
// using object destructuring here
const { angles } = this.state;
// say you are fetching data from an API here
fetch(fromAPI)
.then(response => response.json())
.then(data => { // say data.coordinates = { 0: 90 }
const key = Object.keys(data.coordinates).toString();
const value = Object.values(data.coordinates).toString();
// this overrides current state.angles. To keep values of state.angles use
// the spread operator as mentioned below
this.setState({ angles: { [key]: value }})
// spread out current state.angles if you wanna keep the values + add a
// "new object"
        this.setState({ angles: { ...angles, [key]: value } })
})
}
}
``` |
62,673,530 | So I am having a state somewhat like this
```
this.state={
angles:{}
}
```
So how can I do setState on this empty object? For instance, if I want to set a key and a value inside my empty angles, how can I do that? (Likewise, I want 0:90 inside my `this.state.angles`.)
After setting the state it should look like
```
this.state={
angles:{0:90}
}
```
Thanks in advance. I need to pass both the 0 and the 90 as variables. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673530",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13234895/"
] | I think this is the simplest way to do it:
```
this.setState({angles: {0: 90}});
``` | You probably want something like this:
```js
const state = {
angles: {}
};
const setThisToState = { 0: 90 };
const key = Object.keys(setThisToState).toString();
const value = Object.values(setThisToState).toString();
state.angles = { [key]: value }; // state is now { angles: { '0': '90' } }
```
**EDIT:**
Since you asked for having 0 and 90 in variables, I wanted to point out the computed property notation.
```
class YourClass extends React.Component {
constructor(props) {
super(props);
this.state = {
angles: {}
}
}
  componentDidMount() {
// using object destructuring here
const { angles } = this.state;
// say you are fetching data from an API here
fetch(fromAPI)
.then(response => response.json())
.then(data => { // say data.coordinates = { 0: 90 }
const key = Object.keys(data.coordinates).toString();
const value = Object.values(data.coordinates).toString();
// this overrides current state.angles. To keep values of state.angles use
// the spread operator as mentioned below
this.setState({ angles: { [key]: value }})
// spread out current state.angles if you wanna keep the values + add a
// "new object"
        this.setState({ angles: { ...angles, [key]: value } })
})
}
}
``` |
62,673,550 | I've recently cloned my netlify [site](http://yonseiuicscribe.netlify.app). After deploying via netlify on my cloned github repo, I get the following error
```
gatsby-plugin-netlify-cms" threw an error while running the onCreateWebpackConfig lifecycle:
Module build failed (from ./node_modules/gatsby/dist/utils/babel-loader.js):
```
This has never happened on the original site before. `gatsby develop` and `gatsby build` work just fine on my clone. I've made a separate Algolia account and new ENV keys for the clone to resolve potential conflicts.
I've checked both the original and clone `babel-loader.js` files and they are identical with `stage="test"`
Is one of my files related to babel not properly updated or something?
Original Github [repo](https://github.com/develijahlee/yonseiuicscribe)
Cloned Github [repo](https://github.com/uicscribe/yonseiuicscribe)
My full error report:
```
6:11:05 PM: Build ready to start
6:11:07 PM: build-image version: 9d79ad851d6eff3969322d6e5b1df3d597650c41
6:11:07 PM: build-image tag: v3.3.19
6:11:07 PM: buildbot version: 8e2e5a3a5212190d0490c1372e313994f9085345
6:11:08 PM: Fetching cached dependencies
6:11:08 PM: Failed to fetch cache, continuing with build
6:11:08 PM: Starting to prepare the repo for build
6:11:08 PM: No cached dependencies found. Cloning fresh repo
6:11:08 PM: git clone https://github.com/uicscribe/yonseiuicscribe
6:11:22 PM: Preparing Git Reference refs/heads/master
6:11:25 PM: Different publish path detected, going to use the one specified in the Netlify configuration file: 'public' versus 'public/' in the Netlify UI
6:11:25 PM: Starting build script
6:11:25 PM: Installing dependencies
6:11:25 PM: Python version set to 2.7
6:11:26 PM: v12.18.0 is already installed.
6:11:26 PM: Now using node v12.18.0 (npm v6.14.4)
6:11:26 PM: Started restoring cached build plugins
6:11:26 PM: Finished restoring cached build plugins
6:11:26 PM: Attempting ruby version 2.7.1, read from environment
6:11:28 PM: Using ruby version 2.7.1
6:11:28 PM: Using PHP version 5.6
6:11:28 PM: 5.2 is already installed.
6:11:28 PM: Using Swift version 5.2
6:11:28 PM: Started restoring cached node modules
6:11:28 PM: Finished restoring cached node modules
6:11:28 PM: Installing NPM modules using NPM version 6.14.4
6:12:18 PM: > sharp@0.25.3 install /opt/build/repo/node_modules/gatsby-plugin-manifest/node_modules/sharp
6:12:18 PM: > (node install/libvips && node install/dll-copy && prebuild-install --runtime=napi) || (node-gyp rebuild && node install/dll-copy)
6:12:18 PM: info sharp Downloading https://github.com/lovell/sharp-libvips/releases/download/v8.9.1/libvips-8.9.1-linux-x64.tar.gz
6:12:20 PM: > sharp@0.25.3 install /opt/build/repo/node_modules/sharp
6:12:20 PM: > (node install/libvips && node install/dll-copy && prebuild-install --runtime=napi) || (node-gyp rebuild && node install/dll-copy)
6:12:20 PM: info sharp Using cached /opt/buildhome/.npm/_libvips/libvips-8.9.1-linux-x64.tar.gz
6:12:21 PM: > node-sass@4.13.1 install /opt/build/repo/node_modules/node-sass
6:12:21 PM: > node scripts/install.js
6:12:22 PM: Downloading binary from https://github.com/sass/node-sass/releases/download/v4.13.1/linux-x64-72_binding.node
6:12:22 PM: Download complete
6:12:22 PM: Binary saved to /opt/build/repo/node_modules/node-sass/vendor/linux-x64-72/binding.node
6:12:22 PM: Caching binary to /opt/buildhome/.npm/node-sass/4.13.1/linux-x64-72_binding.node
6:12:23 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/@jimp/plugin-circle/node_modules/core-js
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:23 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/@jimp/plugin-fisheye/node_modules/core-js
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:23 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/@jimp/plugin-shadow/node_modules/core-js
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:23 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/@jimp/plugin-threshold/node_modules/core-js
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:23 PM: > core-js@2.6.11 postinstall /opt/build/repo/node_modules/core-js
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:23 PM: > core-js-pure@3.6.4 postinstall /opt/build/repo/node_modules/core-js-pure
6:12:23 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:24 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/gatsby-plugin-sharp/node_modules/core-js
6:12:24 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:24 PM: > core-js@3.6.5 postinstall /opt/build/repo/node_modules/gatsby-transformer-sharp/node_modules/core-js
6:12:24 PM: > node -e "try{require('./postinstall')}catch(e){}"
6:12:25 PM: > gatsby-telemetry@1.1.47 postinstall /opt/build/repo/node_modules/gatsby-telemetry
6:12:25 PM: > node src/postinstall.js || true
6:12:25 PM: > cwebp-bin@5.1.0 postinstall /opt/build/repo/node_modules/cwebp-bin
6:12:25 PM: > node lib/install.js
6:12:26 PM: ✔ cwebp pre-build test passed successfully
6:12:26 PM: > mozjpeg@6.0.1 postinstall /opt/build/repo/node_modules/mozjpeg
6:12:26 PM: > node lib/install.js
6:12:27 PM: ✔ mozjpeg pre-build test passed successfully
6:12:27 PM: > pngquant-bin@5.0.2 postinstall /opt/build/repo/node_modules/pngquant-bin
6:12:27 PM: > node lib/install.js
6:12:27 PM: ✔ pngquant pre-build test passed successfully
6:12:27 PM: > gatsby-cli@2.8.27 postinstall /opt/build/repo/node_modules/gatsby/node_modules/gatsby-cli
6:12:27 PM: > node scripts/postinstall.js
6:12:27 PM: > gatsby@2.19.7 postinstall /opt/build/repo/node_modules/gatsby
6:12:27 PM: > node scripts/postinstall.js
6:12:28 PM: > node-sass@4.13.1 postinstall /opt/build/repo/node_modules/node-sass
6:12:28 PM: > node scripts/build.js
6:12:28 PM: Binary found at /opt/build/repo/node_modules/node-sass/vendor/linux-x64-72/binding.node
6:12:28 PM: Testing binary
6:12:28 PM: Binary is fine
6:12:31 PM: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@1.2.11 (node_modules/fsevents):
6:12:31 PM: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.2.11: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
6:12:31 PM: npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@2.1.2 (node_modules/chokidar/node_modules/fsevents):
6:12:31 PM: npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@2.1.2: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})
6:12:31 PM: added 2693 packages from 1182 contributors and audited 2766 packages in 62.382s
6:12:34 PM: 113 packages are looking for funding
6:12:34 PM: run `npm fund` for details
6:12:34 PM: found 93 vulnerabilities (90 low, 2 moderate, 1 high)
6:12:34 PM: run `npm audit fix` to fix them, or `npm audit` for details
6:12:35 PM: NPM modules installed
6:12:35 PM: Started restoring cached go cache
6:12:35 PM: Finished restoring cached go cache
6:12:35 PM: go version go1.14.4 linux/amd64
6:12:35 PM: go version go1.14.4 linux/amd64
6:12:35 PM: Installing missing commands
6:12:35 PM: Verify run directory
6:12:36 PM:
6:12:36 PM: ┌─────────────────────────────┐
6:12:36 PM: │ Netlify Build │
6:12:36 PM: └─────────────────────────────┘
6:12:36 PM:
6:12:36 PM: ❯ Version
6:12:36 PM: @netlify/build 2.0.20
6:12:36 PM:
6:12:36 PM: ❯ Flags
6:12:36 PM: deployId: 5efc532996e3900007f07514
6:12:36 PM: mode: buildbot
6:12:36 PM:
6:12:36 PM: ❯ Current directory
6:12:36 PM: /opt/build/repo
6:12:36 PM:
6:12:36 PM: ❯ Config file
6:12:36 PM: No config file was defined: using default values.
6:12:36 PM:
6:12:36 PM: ❯ Context
6:12:36 PM: production
6:12:36 PM:
6:12:36 PM: ┌────────────────────────────────┐
6:12:36 PM: │ 1. Build command from settings │
6:12:36 PM: └────────────────────────────────┘
6:12:36 PM:
6:12:36 PM: $ gatsby build
6:12:39 PM: success open and validate gatsby-configs - 0.030s
6:12:41 PM: success load plugins - 1.738s
6:12:41 PM: success onPreInit - 0.025s
6:12:41 PM: success delete html and css files from previous builds - 0.010s
6:12:41 PM: success initialize cache - 0.010s
6:12:41 PM: success copy gatsby files - 0.036s
6:12:41 PM: success onPreBootstrap - 0.005s
6:12:41 PM: success createSchemaCustomization - 0.019s
6:12:45 PM: success source and transform nodes - 4.156s
6:12:46 PM: success building schema - 0.802s
6:12:47 PM: success createPages - 0.295s
6:12:47 PM: success createPagesStatefully - 0.127s
6:12:47 PM: success onPreExtractQueries - 0.000s
6:12:47 PM: success update schema - 0.033s
6:12:47 PM: success extract queries from components - 0.698s
6:12:47 PM: success write out requires - 0.012s
6:12:47 PM: success write out redirect data - 0.001s
6:12:48 PM: success Build manifest and related icons - 0.246s
6:12:48 PM: success onPostBootstrap - 0.248s
6:12:48 PM: ⠀
6:12:48 PM: info bootstrap finished - 11.242 s
6:12:48 PM: ⠀
6:12:49 PM: error "gatsby-plugin-netlify-cms" threw an error while running the onCreateWebpackConfig lifecycle:
6:12:49 PM: Module build failed (from ./node_modules/gatsby/dist/utils/babel-loader.js):
6:12:49 PM: Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: No "exports" main resolved in /opt/build/repo/node_modules/@babel/helper-compilation-targets/package.json
6:12:49 PM: at applyExports (internal/modules/cjs/loader.js:491:9)
6:12:49 PM: at resolveExports (internal/modules/cjs/loader.js:507:23)
6:12:49 PM: at Function.Module._findPath (internal/modules/cjs/loader.js:635:31)
6:12:49 PM: at Function.Module._resolveFilename (internal/modules/cjs/loader.js:953:27)
6:12:49 PM: at Function.Module._load (internal/modules/cjs/loader.js:842:27)
6:12:49 PM: at Module.require (internal/modules/cjs/loader.js:1026:19)
6:12:49 PM: at require (/opt/build/repo/node_modules/v8-compile-cache/v8-compile-cache.js:159:20)
6:12:49 PM: at Object.<anonymous> (/opt/build/repo/node_modules/@babel/preset-env/lib/debug.js:8:33)
6:12:49 PM: at Module._compile (/opt/build/repo/node_modules/v8-compile-cache/v8-compile-cache.js:178:30)
6:12:49 PM: at Object.Module._extensions..js (internal/modules/cjs/loader.js:1158:10)
6:12:49 PM: at Module.load (internal/modules/cjs/loader.js:986:32)
6:12:49 PM: at Function.Module._load (internal/modules/cjs/loader.js:879:14)
6:12:49 PM: at Module.require (internal/modules/cjs/loader.js:1026:19)
6:12:49 PM: at require (/opt/build/repo/node_modules/v8-compile-cache/v8-compile-cache.js:159:20)
6:12:49 PM: at Object.<anonymous> (/opt/build/repo/node_modules/@babel/preset-env/lib/index.js:11:14)
6:12:49 PM: at Module._compile (/opt/build/repo/node_modules/v8-compile-cache/v8-compile-cache.js:178:30)
6:12:49 PM:
6:12:49 PM:
6:12:49 PM: ModuleBuildError: Module build failed (from ./node_modules/gatsby/dist/utils/b abel-loader.js):
6:12:49 PM: Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: No "exports" main resolved in /opt/buil d/repo/node_modules/@babel/helper-compilation-targets/package.json
6:12:49 PM:
6:12:49 PM: - loader.js:491 applyExports
6:12:49 PM: internal/modules/cjs/loader.js:491:9
6:12:49 PM:
6:12:49 PM: - loader.js:507 resolveExports
6:12:49 PM: internal/modules/cjs/loader.js:507:23
6:12:49 PM:
6:12:49 PM: - loader.js:635 Function.Module._findPath
6:12:49 PM: internal/modules/cjs/loader.js:635:31
6:12:49 PM:
6:12:49 PM: - loader.js:953 Function.Module._resolveFilename
6:12:49 PM: internal/modules/cjs/loader.js:953:27
6:12:49 PM:
6:12:49 PM: - loader.js:842 Function.Module._load
6:12:49 PM: internal/modules/cjs/loader.js:842:27
6:12:49 PM:
6:12:49 PM: - loader.js:1026 Module.require
6:12:49 PM: internal/modules/cjs/loader.js:1026:19
6:12:49 PM:
6:12:49 PM: - v8-compile-cache.js:159 require
6:12:49 PM: [repo]/[v8-compile-cache]/v8-compile-cache.js:159:20
6:12:49 PM:
6:12:49 PM: - debug.js:8 Object.<anonymous>
6:12:49 PM: [repo]/[@babel]/preset-env/lib/debug.js:8:33
6:12:49 PM:
6:12:49 PM: - v8-compile-cache.js:178 Module._compile
6:12:49 PM: [repo]/[v8-compile-cache]/v8-compile-cache.js:178:30
6:12:49 PM:
6:12:49 PM: - loader.js:1158 Object.Module._extensions..js
6:12:49 PM: internal/modules/cjs/loader.js:1158:10
6:12:49 PM:
6:12:49 PM: - loader.js:986 Module.load
6:12:49 PM: internal/modules/cjs/loader.js:986:32
6:12:49 PM:
6:12:49 PM: - loader.js:879 Function.Module._load
6:12:49 PM: internal/modules/cjs/loader.js:879:14
6:12:49 PM:
6:12:49 PM: - loader.js:1026 Module.require
6:12:49 PM: internal/modules/cjs/loader.js:1026:19
6:12:49 PM:
6:12:49 PM: - v8-compile-cache.js:159 require
6:12:49 PM: [repo]/[v8-compile-cache]/v8-compile-cache.js:159:20
6:12:49 PM:
6:12:49 PM: - index.js:11 Object.<anonymous>
6:12:49 PM: [repo]/[@babel]/preset-env/lib/index.js:11:14
6:12:49 PM:
6:12:49 PM: - v8-compile-cache.js:178 Module._compile
6:12:49 PM: [repo]/[v8-compile-cache]/v8-compile-cache.js:178:30
6:12:49 PM:
6:12:49 PM: - NormalModule.js:316
6:12:49 PM: [repo]/[gatsby-plugin-netlify-cms]/[webpack]/lib/NormalModule.js:316:20
6:12:49 PM:
6:12:49 PM: - LoaderRunner.js:367
6:12:49 PM: [repo]/[loader-runner]/lib/LoaderRunner.js:367:11
6:12:49 PM:
6:12:49 PM: - LoaderRunner.js:233
6:12:49 PM: [repo]/[loader-runner]/lib/LoaderRunner.js:233:18
6:12:49 PM:
6:12:49 PM: - LoaderRunner.js:111 context.callback
6:12:49 PM: [repo]/[loader-runner]/lib/LoaderRunner.js:111:13
6:12:49 PM:
6:12:49 PM: - index.js:55
6:12:49 PM: [repo]/[babel-loader]/lib/index.js:55:103
6:12:49 PM:
6:12:49 PM:
6:12:49 PM: not finished run queries - 1.495s
6:12:49 PM: not finished Generating image thumbnails - 1.448s
6:12:49 PM: not finished Building production JavaScript and CSS bundles - 1.347s
6:12:49 PM:
6:12:49 PM: ┌─────────────────────────────┐
6:12:49 PM: │ "build.command" failed │
6:12:49 PM: └─────────────────────────────┘
6:12:49 PM:
6:12:49 PM: Error message
6:12:49 PM: Command failed with exit code 1: gatsby build
6:12:49 PM:
6:12:49 PM: Error location
6:12:49 PM: In Build command from settings:
6:12:49 PM: gatsby build
6:12:49 PM:
6:12:49 PM: Resolved config
6:12:49 PM: build:
6:12:49 PM: command: gatsby build
6:12:49 PM: publish: /opt/build/repo/public
6:12:49 PM: Caching artifacts
6:12:49 PM: Started saving node modules
6:12:49 PM: Finished saving node modules
6:12:49 PM: Started saving build plugins
6:12:49 PM: Finished saving build plugins
6:12:49 PM: Started saving pip cache
6:12:50 PM: Finished saving pip cache
6:12:50 PM: Started saving emacs cask dependencies
6:12:50 PM: Finished saving emacs cask dependencies
6:12:50 PM: Started saving maven dependencies
6:12:50 PM: Finished saving maven dependencies
6:12:50 PM: Started saving boot dependencies
6:12:50 PM: Finished saving boot dependencies
6:12:50 PM: Started saving go dependencies
6:12:50 PM: Finished saving go dependencies
6:12:53 PM: Error running command: Build script returned non-zero exit code: 1
6:12:53 PM: Failing build: Failed to build site
6:12:53 PM: Failed during stage 'building site': Build script returned non-zero exit code: 1
6:12:53 PM: Finished processing build request in 1m45.421790398s
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673550",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12146388/"
] | I deleted `node_modules` and `package-lock.json` and ran `npm install` to reinstall everything.
I think some babel dependencies were outdated.
Answer inspired by this [source](https://github.com/babel/babel/issues/11216) | You can try adding this to your netlify.toml:
```
[functions]
# Specifies `esbuild` for functions bundling
node_bundler = "esbuild"
```
source: <https://github.com/netlify/netlify-plugin-nextjs/issues/597> |
62,673,553 | I'm trying to search for 3 (or more) specific RegEx inside HTML documents.
The HTML files all have different forms and layouts but contain specific words, so I can search for those words.
Now, I'd like it to return this block:
```html
<div>
<p>This 17 is A BIG test</p>
<p>This is another greaterly test</p>
<p>17738 that is yet <em>another</em> <strong>test</strong> with a CAR</p>
</div>
```
I've tried plenty of versions of the code, but I'm currently stumbling in the dark.
```
import re
from bs4 import Tag, BeautifulSoup
text = """
<body>
<div>
<div>
<p>This 19 is A BIG test</p>
<p>This is another test</p>
<p>19 that is yet <em>another</em> great <strong>test</strong> with a CAR</p>
</div>
<div>
<p>This 17 is A BIG test</p>
<p>This is another greaterly test</p>
<p>17738 that is yet <em>another</em> <strong>test</strong> with a CAR</p>
</div>
</div>
</body>
"""
def searchme(bstag):
print("searchme")
regex1 = r"17738"
regex2 = r"CAR"
regex3 = r"greaterly"
switch1 = 0
switch2 = 0
switch3 = 0
result1 = bstag.find(string=re.compile(regex1, re.MULTILINE))
if len(result1) >= 1:
switch1 = 1
result2 = result1.parent.find(string=re.compile(regex2, re.MULTILINE))
if len(result2) >= 1:
switch2 = 1
result3 = result2.parent.find_all(string=re.compile(regex3, re.MULTILINE))
if len(result3) >= 1:
switch3 = 1
if switch1 == 1 and switch2 == 1 and switch3 == 1:
return bstag
else:
if bstag.parent is not None:
searchme(bstag.parent)
else:
searchme(result1.parent)
soup = BeautifulSoup(text, 'html.parser')
el = searchme(soup)
print(el)
```
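To clarify what I'm ultimately after: a `<div>` should only qualify when *all three* patterns match somewhere inside it. Stripped of the BeautifulSoup details, the check is essentially this (a simplified sketch of the idea, not my real code):

```python
import re

# The <div> block I want returned, as plain text.
html_block = """<div>
<p>This 17 is A BIG test</p>
<p>This is another greaterly test</p>
<p>17738 that is yet <em>another</em> <strong>test</strong> with a CAR</p>
</div>"""

patterns = [r"17738", r"CAR", r"greaterly"]

# The block qualifies only if every pattern matches somewhere inside it.
qualifies = all(re.search(p, html_block) for p in patterns)
print(qualifies)  # True
```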
EDIT 1
======
Updated the desired returned code | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673553",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2110476/"
] | I deleted `node_modules` and `package-lock.json` and ran `npm install` to reinstall everything.
I think some babel dependencies were outdated.
Answer inspired by this [source](https://github.com/babel/babel/issues/11216) | You can try adding this to your netlify.toml:
```
[functions]
# Specifies `esbuild` for functions bundling
node_bundler = "esbuild"
```
source: <https://github.com/netlify/netlify-plugin-nextjs/issues/597> |
62,673,554 | I want to trigger a longer-running operation via a REST request and WebFlux. The call should just return an indication that the operation has started. I want to run the long-running operation on a different scheduler (e.g. `Schedulers.single()`). To achieve that I used `subscribeOn`:
```
Mono<RecalculationRequested> recalculateAll() {
return provider.size()
.doOnNext(size -> log.info("Size: {}", size))
.doOnNext(size -> recalculate(size))
.map(RecalculationRequested::new);
}
private void recalculate(int toRecalculateSize) {
Mono.just(toRecalculateSize)
.flatMapMany(this::toPages)
.flatMap(page -> recalculate(page))
.reduce(new RecalculationResult(), RecalculationResult::increment)
.subscribeOn(Schedulers.single())
.subscribe(result -> log.info("Result of recalculation - success:{}, failed: {}",
result.getSuccess(), result.getFailed()));
}
private Mono<RecalculationResult> recalculate(RecalculationPage pageToRecalculate) {
return provider.findElementsToRecalculate(pageToRecalculate.getPageNumber(), pageToRecalculate.getPageSize())
.flatMap(this::recalculateSingle)
.reduce(new RecalculationResult(), RecalculationResult::increment);
}
private Mono<RecalculationResult> recalculateSingle(ElementToRecalculate elementToRecalculate) {
return recalculationTrigger.recalculate(elementToRecalculate)
.doOnNext(result -> {
log.info("Finished recalculation for element: {}", elementToRecalculate);
})
.doOnError(error -> {
log.error("Error during recalculation for element: {}", elementToRecalculate, error);
});
}
```
From the above I want to call:
```
private void recalculate(int toRecalculateSize)
```
in a different thread. However, it does not run on the single-thread pool - it uses a different thread pool. I would expect `subscribeOn` to change it for the whole chain. What should I change, and why, to execute it on a single thread pool?
Just to mention, the method:
```
provider.findElementsToRecalculate(...)
```
uses WebClient to get elements. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673554",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2092052/"
] | One caveat of `subscribeOn` is it does what it says: it runs the act of "subscribing" on the provided `Scheduler`. Subscribing flows from bottom to top (the `Subscriber` subscribes to its parent `Publisher`), at runtime.
Usually you see in documentation and presentations that `subscribeOn` affects the whole chain. That is because most operators / sources will not themselves change threads, and by default will start sending `onNext`/`onComplete`/`onError` signals from the thread from which they were subscribed to.
But as soon as one operator switches threads in that top-to-bottom data path, the reach of `subscribeOn` stops there. A typical example is when there is a `publishOn` in the chain.
The source of data in this case is `reactor-netty` and `netty`, which operate on their own threads and thus act as if there was a `publishOn` at the source.
For WebFlux, I'd say favor using `publishOn` in the main chain of operators, or alternatively use `subscribeOn` inside of inner chains, like inside `flatMap`. | As per the [documentation](https://projectreactor.io/docs/core/release/reference/), all operators prefixed with `doOn` are sometimes referred to as having a “side-effect”. They let you peek inside the sequence’s events without modifying them.
If you want to chain the 'recalculate' step after 'provider.size()', do it with `flatMap`. |
62,673,569 | I have code that reads file data from the defined path and copies the data to my macro workbook's sheet. When I run the code line by line, it works perfectly fine. But when I run the entire code, the workbook closes automatically without my permission. Below is my previous code.
```
Set thisWB = ThisWorkbook
'Open File and Copy Data
Set thatWB1 = Workbooks.Open(TimFilePath)
TFPLR = Cells(Rows.Count, "A").End(xlUp).Row
TFPLC = Cells(1, Columns.Count).End(xlToLeft).Column
TFPLCLTR = Split(Cells(1, TFPLC).Address(True, False), "$")(0)
'MsgBox TFPLCLTR
Range("A2:" & TFPLCLTR & TFPLR).Select
Selection.Copy
'Paste Selected Data in Time Ranges Sheet
'thisWB.Activate
thisWB.Sheets(TimSheet).Activate
If ActiveSheet.AutoFilterMode Then
ActiveSheet.AutoFilterMode = False
End If
Range("A2").PasteSpecial xlPasteValues
Application.CutCopyMode = False
'Close the File
thatWB1.Close SaveChanges:=False
```
After I made the below updates, the workbook is still closing.
```
Set thisWB = ThisWorkbook
'Open Time Range File and Copy Data
Set thatWB1 = Workbooks.Open(TimFilePath)
TFPLR = Cells(Rows.Count, "A").End(xlUp).Row
TFPLC = Cells(1, Columns.Count).End(xlToLeft).Column
TFPLCLTR = Split(Cells(1, TFPLC).Address(True, False), "$")(0)
'MsgBox TFPLCLTR
Range("A2:" & TFPLCLTR & TFPLR).Copy
'Selection.Copy
'Paste Selected Data in Time Ranges Sheet
thisWB.Sheets(TimSheet).Activate
If ActiveSheet.AutoFilterMode Then
ActiveSheet.AutoFilterMode = False
End If
thisWB.Sheets(TimSheet).Range("A2").PasteSpecial Paste:=xlPasteValues, Operation:=xlNone _
, SkipBlanks:=False, Transpose:=False
Application.CutCopyMode = False
'Close the Time ranges File
thatWB1.Close SaveChanges:=False
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673569",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13737789/"
] | The best way to solve this is to declare a variable that fully controls the opened workbook, in the same way you have for `thisWB`, e.g.:
```
Dim thatWB As Workbook
Set thatWB = Workbooks.Open(TimFilePath)
'do the work
thatWB.Close SaveChanges:=False
``` | This code should work without relying on `Active` anything.
```
Option Explicit 'This line is REALLY important.
'It forces you to declare each variable.
'Tools ~ Options ~ Editor. Tick 'Require Variable Declaration' to
'add it to each new module you create.
Public Sub Test()
'Set references to required files.
Dim TimFilePath As String
TimFilePath = "C:/Somepath/MyFile.xlsx"
Dim thatWB As Workbook
Set thatWB = Workbooks.Open(TimFilePath)
Dim thatWS As Worksheet
Set thatWS = thatWB.Worksheets("Sheet1")
Dim thisWB As Workbook
Set thisWB = ThisWorkbook 'Workbook containing this code.
Dim thisWS As Worksheet
Set thisWS = thisWB.Worksheets("Sheet1")
'Work on the files without selecting them.
Dim LastRow As Long
LastRow = thatWS.Cells.Find("*", , , , xlByRows, xlPrevious).Row
If LastRow = 0 Then LastRow = 1
Dim LastColumn As Long
LastColumn = thatWS.Cells.Find("*", , , , xlByColumns, xlPrevious).Column
If LastColumn = 0 Then LastColumn = 1
Dim LastCell As Range
Set LastCell = thatWS.Cells(LastRow, LastColumn)
thatWS.Range("A2", LastCell).Copy
thisWS.Range("A2").PasteSpecial xlPasteValues
thatWB.Close False
End Sub
``` |
62,673,585 | Currently, images with the attribute `loading="lazy"` (<https://web.dev/native-lazy-loading/>) are displayed immediately when loaded, i.e. without a fade-in effect.
Is there a way to animate images with the `loading="lazy"` attribute when they are loaded and preferably without JavaScript?
I know there are many lazy-loading JavaScript libraries, but now that loading is done natively by the browser, handling it via JavaScript feels like a step backwards. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673585",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1329470/"
] | I managed to use a simple jQuery solution, since it's already used in my case:
```
//fade in lazy loaded images
$('article img').on('load', function(){
$(this).addClass('loaded');
});
```
The images have opacity 0 by default and get opacity 1 from the new class:
```
img{
opacity: 0;
transition: opacity 300ms ease-in-out;
}
img.loaded{
opacity: 1;
}
```
This is not graceful, as users w/o JS won't see the images ... but frankly I don't care ^^ To be honest, who has JS disabled? | Unfortunately it is not as simple as the jQuery solution above.
`.on('load')` does not work properly in all browsers (like Chrome) as discussed here: [Check if an image is loaded (no errors) with jQuery](https://stackoverflow.com/questions/1977871/check-if-an-image-is-loaded-no-errors-with-jquery) |
62,673,586 | I am trying to create multiple consumers in a consumer group for parallel processing, since we have a heavy inflow of messages. I am using Spring Boot and KafkaTemplate. How can we create multiple consumers belonging to a single consumer group, in a single instance of a Spring Boot application?
Will having multiple methods annotated with @KafkaListener create multiple consumers? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3696393/"
] | You have to use `ConcurrentMessageListenerContainer`. It delegates to one or more `KafkaMessageListenerContainer` instances to provide multi-threaded consumption.
```java
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(10);
factory.getContainerProperties().setPollTimeout(3000);
return factory;
}
```
factory.setConcurrency(10) creates 10 `KafkaMessageListenerContainer` instances. Each instance gets some amount of partitions. It depends on the number of partitions you configured when you created the topic.
Some preparation steps:
```java
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;
private final static String BOOTSTRAP_ADDRESS = "localhost:9092";
private final static String CONSUMER_GROUP = "consumer-group-1";
private final static String TOPIC = "test-topic";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@KafkaListener(topics = TOPIC, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload String message) {
logger.info(message);
}
public void start() {
try {
Thread.sleep(5000L);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < 10; i++) {
kafkaTemplate.send(TOPIC, i, String.valueOf(i), "Message " + i);
}
logger.info("All message are sent");
}
```
If you run the method above you can see that each `KafkaMessageListenerContainer` instance processes the messages being put into the partition which that instance serves.
Thread.sleep() is added to wait for the consumers to be initialized.
```java
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-4-C-1] r.s.c.KafkaConsumersDemo : Message 5
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-6-C-1] r.s.c.KafkaConsumersDemo : Message 7
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-7-C-1] r.s.c.KafkaConsumersDemo : Message 8
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-9-C-1] r.s.c.KafkaConsumersDemo : Message 1
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-0-C-1] r.s.c.KafkaConsumersDemo : Message 0
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-8-C-1] r.s.c.KafkaConsumersDemo : Message 9
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-3-C-1] r.s.c.KafkaConsumersDemo : Message 4
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-2-C-1] r.s.c.KafkaConsumersDemo : Message 3
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-1-C-1] r.s.c.KafkaConsumersDemo : Message 2
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-5-C-1] r.s.c.KafkaConsumersDemo : Message 6
``` | Yes, the `@KafkaListener` will create multiple consumers for you.
With that you can configure all of them to use the same topic and belong to the same group.
The Kafka coordinator will distribute partitions to your consumers.
Although if you have only one partition in the topic, the concurrency won't happen: a single partition is processed in a single thread.
Another option is indeed to configure a `concurrency`, and again several consumers are going to be created according to the `concurrency <-> partition` state. |
62,673,586 | I am trying to create multiple consumers in a consumer group for parallel processing, since we have a heavy inflow of messages. I am using Spring Boot and KafkaTemplate. How can we create multiple consumers belonging to a single consumer group, in a single instance of a Spring Boot application?
Will having multiple methods annotated with @KafkaListener create multiple consumers? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3696393/"
] | You have to use `ConcurrentMessageListenerContainer`. It delegates to one or more `KafkaMessageListenerContainer` instances to provide multi-threaded consumption.
```java
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(10);
factory.getContainerProperties().setPollTimeout(3000);
return factory;
}
```
factory.setConcurrency(10) creates 10 `KafkaMessageListenerContainer` instances. Each instance gets some amount of partitions. It depends on the number of partitions you configured when you created the topic.
Some preparation steps:
```java
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;
private final static String BOOTSTRAP_ADDRESS = "localhost:9092";
private final static String CONSUMER_GROUP = "consumer-group-1";
private final static String TOPIC = "test-topic";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@KafkaListener(topics = TOPIC, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload String message) {
logger.info(message);
}
public void start() {
try {
Thread.sleep(5000L);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < 10; i++) {
kafkaTemplate.send(TOPIC, i, String.valueOf(i), "Message " + i);
}
logger.info("All message are sent");
}
```
If you run the method above you can see that each `KafkaMessageListenerContainer` instance processes the messages being put into the partition which that instance serves.
Thread.sleep() is added to wait for the consumers to be initialized.
```java
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-4-C-1] r.s.c.KafkaConsumersDemo : Message 5
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-6-C-1] r.s.c.KafkaConsumersDemo : Message 7
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-7-C-1] r.s.c.KafkaConsumersDemo : Message 8
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-9-C-1] r.s.c.KafkaConsumersDemo : Message 1
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-0-C-1] r.s.c.KafkaConsumersDemo : Message 0
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-8-C-1] r.s.c.KafkaConsumersDemo : Message 9
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-3-C-1] r.s.c.KafkaConsumersDemo : Message 4
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-2-C-1] r.s.c.KafkaConsumersDemo : Message 3
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-1-C-1] r.s.c.KafkaConsumersDemo : Message 2
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-5-C-1] r.s.c.KafkaConsumersDemo : Message 6
``` | As @Salavat Yalalo suggested, I made my Kafka container factory a `ConcurrentKafkaListenerContainerFactory`. On the @KafkaListener method I added an option called `concurrency`, which accepts an integer as a string and indicates the number of consumers to be spawned, like below
```
@KafkaListener(concurrency = "4", containerFactory = "concurrentKafkaListenerContainerFactory") // containerFactory = bean name of the factory; other optional attributes omitted
public void topicConsumer(Message<MyObject> myObject){
//.....
}
```
When run, I see 4 consumers being created in a single consumer group. |
62,673,586 | I am trying to create multiple consumers in a consumer group for parallel processing, since we have a heavy inflow of messages. I am using Spring Boot and KafkaTemplate. How can we create multiple consumers belonging to a single consumer group, in a single instance of a Spring Boot application?
Will having multiple methods annotated with @KafkaListener create multiple consumers? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673586",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/3696393/"
] | Yes, the `@KafkaListener` will create multiple consumers for you.
With that you can configure all of them to use the same topic and belong to the same group.
The Kafka coordinator will distribute partitions to your consumers.
Although if you have only one partition in the topic, the concurrency won't happen: a single partition is processed in a single thread.
Another option is indeed to configure a `concurrency`, and again several consumers are going to be created according to the `concurrency <-> partition` state. | As @Salavat Yalalo suggested, I made my Kafka container factory a `ConcurrentKafkaListenerContainerFactory`. On the @KafkaListener method I added an option called `concurrency`, which accepts an integer as a string and indicates the number of consumers to be spawned, like below
```
@KafkaListener(concurrency = "4", containerFactory = "concurrentKafkaListenerContainerFactory") // containerFactory = bean name of the factory; other optional attributes omitted
public void topicConsumer(Message<MyObject> myObject){
//.....
}
```
When run, I see 4 consumers being created in a single consumer group. |
62,673,592 | I am currently trying to figure out a formula; I have the following: `=IF(G18>109,"A*",IF(G18>103,"A",IF(G18>85,"B",IF(G18>67,"C","F"))))`
This allows me to work out grades for my students in my French class, but I want to add another part to the formula... I want it so that if any cell in `C18:F18` says "INC", then the cell this formula is in says "inc" for an incomplete course.
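What I'm imagining is wrapping my existing formula in an "INC" check, something along these lines (untested; `COUNTIF` counts how many cells in `C18:F18` contain "INC"):

```
=IF(COUNTIF(C18:F18,"INC")>0,"inc",IF(G18>109,"A*",IF(G18>103,"A",IF(G18>85,"B",IF(G18>67,"C","F")))))
```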
Is that possible? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846125/"
] | You have to use `ConcurrentMessageListenerContainer`. It delegates to one or more `KafkaMessageListenerContainer` instances to provide multi-threaded consumption.
```java
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(10);
factory.getContainerProperties().setPollTimeout(3000);
return factory;
}
```
factory.setConcurrency(10) creates 10 `KafkaMessageListenerContainer` instances. Each instance gets some amount of partitions. It depends on the number of partitions you configured when you created the topic.
Some preparation steps:
```java
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;
private final static String BOOTSTRAP_ADDRESS = "localhost:9092";
private final static String CONSUMER_GROUP = "consumer-group-1";
private final static String TOPIC = "test-topic";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@KafkaListener(topics = TOPIC, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload String message) {
logger.info(message);
}
public void start() {
try {
Thread.sleep(5000L);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < 10; i++) {
kafkaTemplate.send(TOPIC, i, String.valueOf(i), "Message " + i);
}
logger.info("All message are sent");
}
```
If you run the method above you can see that each `KafkaMessageListenerContainer` instance processes the messages being put into the partition which that instance serves.
Thread.sleep() is added to wait for the consumers to be initialized.
```java
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-4-C-1] r.s.c.KafkaConsumersDemo : Message 5
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-6-C-1] r.s.c.KafkaConsumersDemo : Message 7
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-7-C-1] r.s.c.KafkaConsumersDemo : Message 8
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-9-C-1] r.s.c.KafkaConsumersDemo : Message 1
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-0-C-1] r.s.c.KafkaConsumersDemo : Message 0
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-8-C-1] r.s.c.KafkaConsumersDemo : Message 9
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-3-C-1] r.s.c.KafkaConsumersDemo : Message 4
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-2-C-1] r.s.c.KafkaConsumersDemo : Message 3
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-1-C-1] r.s.c.KafkaConsumersDemo : Message 2
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-5-C-1] r.s.c.KafkaConsumersDemo : Message 6
``` | Yes, the `@KafkaListener` will create multiple consumers for you.
With that you can configure all of them to use the same topic and belong to the same group.
The Kafka coordinator will distribute partitions to your consumers.
Although if you have only one partition in the topic, the concurrency won't happen: a single partition is processed in a single thread.
Another option is indeed to configure a `concurrency`, and again several consumers are going to be created according to the `concurrency <-> partition` state. |
62,673,592 | I am currently trying to figure out a formula; I have the following: `=IF(G18>109,"A*",IF(G18>103,"A",IF(G18>85,"B",IF(G18>67,"C","F"))))`
This allows me to work out grades for my students in my French class, but I want to add another part to the formula... I want it so that if any cell in `C18:F18` says "INC", then the cell this formula is in says "inc" for an incomplete course.
Is that possible? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846125/"
] | You have to use `ConcurrentMessageListenerContainer`. It delegates to one or more `KafkaMessageListenerContainer` instances to provide multi-threaded consumption.
```java
@Bean
KafkaListenerContainerFactory<ConcurrentMessageListenerContainer<String, String>> kafkaListenerContainerFactory() {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory());
factory.setConcurrency(10);
factory.getContainerProperties().setPollTimeout(3000);
return factory;
}
```
factory.setConcurrency(10) creates 10 `KafkaMessageListenerContainer` instances. Each instance gets some amount of partitions. It depends on the number of partitions you configured when you created the topic.
Some preparation steps:
```java
@Autowired
private KafkaTemplate<String, Object> kafkaTemplate;
private final static String BOOTSTRAP_ADDRESS = "localhost:9092";
private final static String CONSUMER_GROUP = "consumer-group-1";
private final static String TOPIC = "test-topic";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_ADDRESS);
props.put(ConsumerConfig.GROUP_ID_CONFIG, CONSUMER_GROUP);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
return new DefaultKafkaConsumerFactory<>(props);
}
@KafkaListener(topics = TOPIC, containerFactory = "kafkaListenerContainerFactory")
public void listen(@Payload String message) {
logger.info(message);
}
public void start() {
try {
Thread.sleep(5000L);
} catch (InterruptedException e) {
e.printStackTrace();
}
for (int i = 0; i < 10; i++) {
kafkaTemplate.send(TOPIC, i, String.valueOf(i), "Message " + i);
}
logger.info("All message are sent");
}
```
If you run the method above you can see that each `KafkaMessageListenerContainer` instance processes the messages being put into the partition which that instance serves.
Thread.sleep() is added to wait for the consumers to be initialized.
```java
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-4-C-1] r.s.c.KafkaConsumersDemo : Message 5
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-6-C-1] r.s.c.KafkaConsumersDemo : Message 7
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-7-C-1] r.s.c.KafkaConsumersDemo : Message 8
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-9-C-1] r.s.c.KafkaConsumersDemo : Message 1
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-0-C-1] r.s.c.KafkaConsumersDemo : Message 0
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-8-C-1] r.s.c.KafkaConsumersDemo : Message 9
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-3-C-1] r.s.c.KafkaConsumersDemo : Message 4
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-2-C-1] r.s.c.KafkaConsumersDemo : Message 3
2020-07-01 15:48:34.801 INFO 201566 --- [ntainer#0-1-C-1] r.s.c.KafkaConsumersDemo : Message 2
2020-07-01 15:48:34.800 INFO 201566 --- [ntainer#0-5-C-1] r.s.c.KafkaConsumersDemo : Message 6
``` | As @Salavat Yalalo suggested, I made my Kafka container factory a `ConcurrentKafkaListenerContainerFactory`. On the @KafkaListener method I added an option called `concurrency`, which accepts an integer as a string and indicates the number of consumers to be spawned, like below
```
@KafkaListener(concurrency = "4", containerFactory = "concurrentKafkaListenerContainerFactory") // containerFactory = bean name of the factory; other optional attributes omitted
public void topicConsumer(Message<MyObject> myObject){
//.....
}
```
When run, I see 4 consumers being created in a single consumer group. |
62,673,592 | I am currently trying to figure out a formula; I have the following: `=IF(G18>109,"A*",IF(G18>103,"A",IF(G18>85,"B",IF(G18>67,"C","F"))))`
This allows me to work out grades for my students in my French class, but I want to add another part to the formula... I want it so that if any cell in `C18:F18` says "INC", then the cell this formula is in says "inc" for an incomplete course.
Is that possible? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673592",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846125/"
] | Yes, the `@KafkaListener` will create multiple consumers for you.
With that you can configure all of them to use the same topic and belong to the same group.
The Kafka coordinator will distribute partitions to your consumers.
Although if you have only one partition in the topic, the concurrency won't happen: a single partition is processed in a single thread.
Another option is indeed to configure a `concurrency`, and again several consumers are going to be created according to the `concurrency <-> partition` state. | As @Salavat Yalalo suggested, I made my Kafka container factory a `ConcurrentKafkaListenerContainerFactory`. On the @KafkaListener method I added an option called `concurrency`, which accepts an integer as a string and indicates the number of consumers to be spawned, like below
```
@KafkaListener(concurrency = "4", containerFactory = "concurrentKafkaListenerContainerFactory") // containerFactory = bean name of the factory; other optional attributes omitted
public void topicConsumer(Message<MyObject> myObject){
//.....
}
```
When run, I see 4 consumers being created in a single consumer group. |
62,673,595 | Can you tell me why the OUTPUT of this program is `ABBBCDE`?
It was my exam question.
```
#include <stdio.h>
int main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A");
for (; i < 3; i++)
printf ("B");
printf ("C");
for (; i < 4; i++)
printf ("D");
printf ("E");
}
return 0;
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | ```
int
main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A");
for (; i < 3; i++)
printf ("B");
printf ("C");
for (; i < 4; i++)
printf ("D");
printf ("E");
}
return 0;
}
```
is the same as
```
int
main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A"); // prints once
for (; i < 3; i++)
{
printf ("B"); // i = 0 at beginning, and loops until i = 2 => 3 times
}
printf ("C"); // prints once
for (; i < 4; i++)
{
printf ("D"); // i = 3 at beginning, so it prints once
}
printf ("E"); // prints once
} // next loop, i is already 4, which is more than 2, so first loop stops
return 0;
}
```
Please don't write like this for production code. | If you indent the code properly it will help you understand what's going on:
```
#include <stdio.h>
int main(void)
{
int i;
for (i = 0; i < 2; i++)
{
printf("A"); // prints A 1 time, i is still 0
for (; i < 3; i++) // prints B 3 times, i will now be 3, the cycle ends
printf("B");
printf("C"); // prints C 1 time, this is not in any inner loop
for (; i < 4; i++) // prints D once, as i is 3, the cycle only runs once
printf("D");
printf("E"); // again prints E once
} // as i is 4 the cycle will end, the condition is i < 2
return 0;
}
``` |
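A quick way to double-check the trace in the answers above is to mirror the C loops in a small simulation — an illustrative Python sketch, not part of the original answers:

```python
# Mirror the C program's loop structure: the inner loops reuse the same
# `i` as the outer loop, which is why the outer loop body runs only once.
def run():
    out = []
    i = 0
    while i < 2:
        out.append("A")
        while i < 3:          # for (; i < 3; i++)
            out.append("B")
            i += 1
        out.append("C")
        while i < 4:          # for (; i < 4; i++)
            out.append("D")
            i += 1
        out.append("E")
        i += 1                # outer loop's i++ -> i becomes 5, loop ends
    return "".join(out)

if __name__ == "__main__":
    print(run())  # ABBBCDE
```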
62,673,595 | Can you tell me why the OUTPUT of this program is `ABBBCDE`?
It was my exam question.
```
#include <stdio.h>
int main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A");
for (; i < 3; i++)
printf ("B");
printf ("C");
for (; i < 4; i++)
printf ("D");
printf ("E");
}
return 0;
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | ```
int
main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A");
for (; i < 3; i++)
printf ("B");
printf ("C");
for (; i < 4; i++)
printf ("D");
printf ("E");
}
return 0;
}
```
is the same as
```
int
main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A"); // prints once
for (; i < 3; i++)
{
printf ("B"); // i = 0 at beginning, and loops until i = 2 => 3 times
}
printf ("C"); // prints once
for (; i < 4; i++)
{
printf ("D"); // i = 3 at beginning, so it prints once
}
printf ("E"); // prints once
} // next loop, i is already 4, which is more than 2, so first loop stops
return 0;
}
```
Please don't write like this for production code. | Because you have a for loop that checks if i is less than three. It starts at zero; zero is less than three, so it prints B once. Then it adds one to i and i becomes one, which is still less than three, so it prints another B. Then it adds one to i, making it two, which is still less than three, which makes it print B a third time. Then it adds another one to i, making it three, which is not less than three, so the program continues by printing C. |
62,673,595 | Can you tell me why the OUTPUT of this program is `ABBBCDE`?
It was my exam question.
```
#include <stdio.h>
int main (void)
{
int i;
for (i = 0; i < 2; i++)
{
printf ("A");
for (; i < 3; i++)
printf ("B");
printf ("C");
for (; i < 4; i++)
printf ("D");
printf ("E");
}
return 0;
}
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673595",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] | If you indent the code properly it will help you understand what's going on:
```
#include <stdio.h>
int main(void)
{
int i;
for (i = 0; i < 2; i++)
{
printf("A"); // prints A 1 time, i is still 0
for (; i < 3; i++) // prints B 3 times, i will now be 3, the cycle ends
printf("B");
printf("C"); // prints C 1 time, this is not in any inner loop
for (; i < 4; i++) // prints D once, as i is 3, the cycle only runs once
printf("D");
printf("E"); // again prints E once
} // as i is 4 the cycle will end, the condition is i < 2
return 0;
}
``` | Because you have a for loop that checks if i is less than three. It starts at zero; zero is less than three, so it prints B once. Then it adds one to i and i becomes one, which is still less than three, so it prints another B. Then it adds one to i, making it two, which is still less than three, which makes it print B a third time. Then it adds another one to i, making it three, which is not less than three, so the program continues by printing C. |
62,673,610 | I am using WebdriverIO version 5 and would like to see the logs of my test run.
I tried the command: `npm run rltest --logLevel=info`, but all I can see is the output of the spec reporter.
```
[chrome 83.0.4103.116 Mac OS X #0-0] Running: chrome (v83.0.4103.116) on Mac OS X
[chrome 83.0.4103.116 Mac OS X #0-0] Session ID: 16d526a6b3cc51f54110024b112b247c
[chrome 83.0.4103.116 Mac OS X #0-0]
[chrome 83.0.4103.116 Mac OS X #0-0] cancel button
[chrome 83.0.4103.116 Mac OS X #0-0] ✓ Verify that when the user clicks on the Cancel
button, no changes made to the list
[chrome 83.0.4103.116 Mac OS X #0-0]
[chrome 83.0.4103.116 Mac OS X #0-0] 1 passing (36.2s)
```
Is there a way to see more detailed logs? Do I need to configure anything inside `wdio.conf.js`?
Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673610",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10431512/"
] | Check documentation - [logLevel](https://webdriver.io/docs/options.html#loglevel)
So basically you need to set up this property in `wdio.conf.js`:
```
// ===================
// Test Configurations
// ===================
// Define all options that are relevant for the WebdriverIO instance here
//
// Level of logging verbosity: trace | debug | info | warn | error | silent
logLevel: 'debug',
``` | You should see the `wdio` logs in your console, as WebdriverIO defaults to dumping all the Selenium logs into `stdout`. Hope I understood correctly and you're talking about the `webdriver` logs, as below:
```
[0-0] 2020-07-01T09:28:53.869Z INFO webdriver: [GET] http://127.0.0.1:4444/wd/hub/session/933eeee4135ea0ca37d57f0b807cb29e/element/0.45562246642229964-9/displayed
[0-0] 2020-07-01T09:28:53.878Z INFO webdriver: RESULT true
[0-0] 2020-07-01T09:28:53.879Z INFO webdriver: COMMAND findElement("css selector", "#_evidon-l3 button")
[0-0] 2020-07-01T09:28:53.879Z INFO webdriver: [POST] http://127.0.0.1:4444/wd/hub/session/933eeee4135ea0ca37d57f0b807cb29e/element
[0-0] 2020-07-01T09:28:53.879Z INFO webdriver: DATA { using: 'css selector', value: '#_evidon-l3 button' }
[0-0] 2020-07-01T09:28:53.888Z INFO webdriver: RESULT { ELEMENT: '0.45562246642229964-10' }
[0-0] 2020-07-01T09:28:53.889Z INFO webdriver: COMMAND isElementDisplayed("0.45562246642229964-10")
```
If this is not the case, please check whether you have the `outputDir` option set inside the `wdio.conf.js` file. If indeed you have this set, then you are overriding the default, sending the log streams to files inside that path:
e.g: `outputDir: 'wdio-logs',` (`wdio.conf.js` file)
[![enter image description here](https://i.stack.imgur.com/mv26l.png)](https://i.stack.imgur.com/mv26l.png)
The logs should be inside the `wdio-x-y.log` files. So, either debug your cases using the overridden path log files, or remove `outputDir` entry from your `wdio.conf.js` file if you want them inside the console.
---
Even better, you could be fancy and set `outputDir: process.env.CONSOLELOGS ? null : 'wdio/logs/path/here'`. Then you can run your checks with a system variable to trigger console logging:
`CONSOLELOGS=true npm run rltest <params>` |
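The env-var toggle above can be sketched in Python's `logging` for comparison. This is illustrative only — it is not WebdriverIO code; the `CONSOLELOGS` name is simply reused from the answer:

```python
import logging
import os
import sys

# Same toggle idea as the wdio.conf.js snippet: an environment variable
# decides between console output and a log file.
def make_logger(name="tests"):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    if os.environ.get("CONSOLELOGS"):
        handler = logging.StreamHandler(sys.stdout)   # logs to the console
    else:
        handler = logging.FileHandler("wdio.log")     # logs to a file
    logger.addHandler(handler)
    return logger
```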
62,673,611 | I’m trying to fetch some data, specifically the ‘html’ from the [1] array from the json displayed below. However when I console log welcomeTXT after setting the variable to that json selector it says
```
undefined
```
Any idea why the data is returning as undefined?
```
var welcomeTXT;
fetch('http://ip.ip.ip.ip:port/ghost/api/v3/content/pages/?key=276f4fc58131dfcf7a268514e5')
.then(response => response.json())
.then(data => {
console.log(data);
welcomeTXT = ['pages'][0]['title'];
console.log(welcomeTXT);
});
JSON
{
"pages":[
{
"id":"5efb6bbeeab44526aecc0abb",
"uuid":"38b78123-e5a8-4346-8f6e-6f57a1a284d0",
"title":"About Section",
"slug":"about-section",
"html":"<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>",
"comment_id":"5efb6bbeeab44526aecc0abb",
"feature_image":null,
"featured":false,
"visibility":"public",
"created_at":"2020-06-30T16:43:42.000+00:00",
"updated_at":"2020-06-30T16:58:53.000+00:00",
"published_at":"2020-06-30T16:58:37.000+00:00",
"custom_excerpt":null,
"codeinjection_head":null,
"codeinjection_foot":null,
"custom_template":null,
"canonical_url":null,
"url":"/about-section/",
"excerpt":"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\nincididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis\nnostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu\nfugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in\nculpa qui officia deserunt mollit anim id est laborum.",
"reading_time":0,
"page":true,
"og_image":null,
"og_title":null,
"og_description":null,
"twitter_image":null,
"twitter_title":null,
"twitter_description":null,
"meta_title":null,
"meta_description":null
},
{
"id":"5efb6f53eab44526aecc0ac4",
"uuid":"26463d5f-011e-46b3-a1e2-60e213e33f6f",
"title":"Welcome",
"slug":"welcome",
"html":"<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum.</p>",
"comment_id":"5efb6f53eab44526aecc0ac4",
"feature_image":null,
"featured":false,
"visibility":"public",
"created_at":"2020-06-30T16:58:59.000+00:00",
"updated_at":"2020-06-30T16:59:02.000+00:00",
"published_at":"2020-06-30T16:59:02.000+00:00",
"custom_excerpt":null,
"codeinjection_head":null,
"codeinjection_foot":null,
"custom_template":null,
"canonical_url":null,
"url":"http:/welcome/",
"excerpt":"Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor\nincididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis\nnostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.\nDuis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu\nfugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in\nculpa qui officia deserunt mollit anim id est laborum.",
"reading_time":0,
"page":true,
"og_image":null,
"og_title":null,
"og_description":null,
"twitter_image":null,
"twitter_title":null,
"twitter_description":null,
"meta_title":null,
"meta_description":null
}
],
"meta":{
"pagination":{
"page":1,
"limit":15,
"pages":1,
"total":2,
"next":null,
"prev":null
}
}
}
```
Any help would be wonderful, as I can’t figure out why the JSON isn’t working | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673611",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13612518/"
] | you forgot `data` here:
```js
welcomeTXT = data['pages'][0]['title'];
``` | In promise then you missed the "data"
>
> welcomeTXT = ['pages'][0]['title'];
>
>
>
it should be
>
> welcomeTXT = data['pages'][0]['title'];
>
>
>
```
var welcomeTXT;
fetch('http://ip.ip.ip.ip4/ghost/api/v3/content/pages/?key=276f4fc58131dfcf7a268514e5')
.then(response => response.json())
.then(data => {
console.log(data);
welcomeTXT = data['pages'][0]['title'];
console.log(welcomeTXT);
});
``` |
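For reference, the lookup path itself can be exercised offline. Here is a Python sketch on a hypothetical miniature of the payload (not the real API response) showing why the original line returned `undefined`:

```python
import json

# Hypothetical miniature of the Ghost payload from the question, just
# enough to show the lookup path.
payload = json.loads("""
{"pages": [
    {"title": "About Section", "html": "<p>about</p>"},
    {"title": "Welcome",       "html": "<p>welcome</p>"}
]}
""")

# The bug: `['pages'][0]` indexes a *literal* one-element list, yielding
# the string 'pages' -- it never touches the response at all.
print(["pages"][0])                   # pages
# The fix, equivalent to the JS `data['pages'][0]['title']`:
print(payload["pages"][0]["title"])   # About Section
# The [1] entry's html, which the question actually wanted:
print(payload["pages"][1]["html"])    # <p>welcome</p>
```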
62,673,628 | I want to have a field `tags` as `completion`, and I do not want this field to participate in the scoring, so I am trying to apply the mapping
```
"tags": {
"type": "completion",
"index": false
}
```
And I am getting an error
```
ElasticsearchStatusException[Elasticsearch exception [type=mapper_parsing_exception, reason=Mapping definition for [tags] has unsupported parameters: [index : false]]]
```
What should the mapping be? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673628",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13674325/"
] | The `completion` type stores the data in a different way in a finite state transducer (FST) data structure and not in the inverted index.
You can find more information about the completion suggester here:
* [Official documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters.html#completion-suggester)
* [Older article but explains the FST concept](https://www.elastic.co/blog/you-complete-me) | The `mapper_parsing_exception` says that Elasticsearch was not able to parse the request body. One thing to check if you are building the body in Python: JSON `false` is written `False` in Python, e.g.
data = {"tags": {"type": "completion", "index": False}}
and you send this in the request as JSON with `json.dumps(data)`. |
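The Python-side detail in the last answer can be shown concretely. Illustrative snippet only — as the first answer explains, Elasticsearch still rejects `index` on a `completion` field; this just shows the `False`/`false` round-trip:

```python
import json

# Building the mapping body in Python: JSON's `false` is Python's
# `False`, and json.dumps turns it back into lowercase `false`.
mapping = {"tags": {"type": "completion", "index": False}}
body = json.dumps(mapping)
print(body)  # {"tags": {"type": "completion", "index": false}}
```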
62,673,634 | I don't know why it prints `data return` first, instead of after the loop is done
code
```
exports.put = async function (req, res, next) {
const data = Array.from(req.body);
let errors = false;
data.forEach(async (update, index) => {
try {
const result = await TypeofAccount.findByIdAndUpdate(
new mongoose.Types.ObjectId(update._id),
{
$set: { name: update.name },
},
{ new: true, runValidators: true },
);
console.log(index);
if (!result) {
errors = true;
console.log('errors');
return res.status(500).send();
}
} catch (e) {
return res.status(500).send();
}
});
console.log('data return');
res.json(data);
};
```
* data return
* PUT /api//typeofaccount 200 5.551 ms - 202
* 0
* 1
* 2 | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673634",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10842900/"
] | The `completion` type stores the data in a different way in a finite state transducer (FST) data structure and not in the inverted index.
You can find more information about the completion suggester here:
* [Official documentation](https://www.elastic.co/guide/en/elasticsearch/reference/current/search-suggesters.html#completion-suggester)
* [Older article but explains the FST concept](https://www.elastic.co/blog/you-complete-me) | The `mapper_parsing_exception` says that Elasticsearch was not able to parse the request body. One thing to check if you are building the body in Python: JSON `false` is written `False` in Python, e.g.
data = {"tags": {"type": "completion", "index": False}}
and you send this in the request as JSON with `json.dumps(data)`. |
62,673,652 | I added a collapsing toolbar in my app, but now it only collapses when a user scrolls the image view inside the CollapsingToolbarLayout. How can I collapse the toolbar when a user scrolls from anywhere within the view?
this is my layout
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.design.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
tools:context=".ui.ProfileActivity">
<android.support.design.widget.AppBarLayout
android:id="@+id/app_bar"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:fitsSystemWindows="true">
<android.support.design.widget.CollapsingToolbarLayout
android:id="@+id/toolbar_layout"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:fitsSystemWindows="true"
app:contentScrim="?attr/colorPrimary"
app:layout_scrollFlags="scroll|exitUntilCollapsed">
<ImageView
android:id="@+id/profile_image"
android:layout_width="match_parent"
android:layout_height="400dp"
android:scaleType="centerCrop"
android:src="@drawable/default_avatar"
app:layout_collapseMode="parallax"
app:layout_collapseParallaxMultiplier="0.7" />
<android.support.v7.widget.Toolbar
android:id="@+id/toolbar"
android:layout_width="match_parent"
android:layout_height="?attr/actionBarSize"
app:layout_collapseMode="pin"
app:popupTheme="@style/AppTheme.PopupOverlay" />
</android.support.design.widget.CollapsingToolbarLayout>
</android.support.design.widget.AppBarLayout>
<include layout="@layout/profile_contents"
app:layout_anchor="@+id/app_bar"
app:layout_anchorGravity="bottom"
android:layout_gravity="bottom"
android:layout_height="wrap_content"
android:layout_width="match_parent" />
</android.support.design.widget.CoordinatorLayout>
```
These are the contents of profile\_contents.xml for now because I am going to add more items in future.
```
<?xml version="1.0" encoding="utf-8"?>
<android.support.constraint.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
android:layout_width="match_parent"
android:layout_height="match_parent"
xmlns:app="http://schemas.android.com/apk/res-auto">
<TextView
android:id="@+id/profile_displayName"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="32dp"
android:text="loading name..."
android:textColor="@color/black"
android:textSize="24dp"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toTopOf="parent" />
<TextView
android:id="@+id/profile_status"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:layout_marginTop="32dp"
android:text="loading status..."
android:textColor="@color/black"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/profile_displayName" />
<Button
android:id="@+id/profile_send_req_btn"
android:layout_width="182dp"
android:layout_height="38dp"
android:layout_marginTop="44dp"
android:background="@drawable/profile_button_bg"
android:padding="10dp"
android:text="@string/send_friend_request"
android:textAllCaps="false"
android:textColor="@color/white"
app:layout_constraintEnd_toEndOf="parent"
app:layout_constraintHorizontal_bias="0.498"
app:layout_constraintStart_toStartOf="parent"
app:layout_constraintTop_toBottomOf="@+id/profile_status" />
</android.support.constraint.ConstraintLayout>
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673652",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10264518/"
] | Put your included layout inside `NestedScrollView` view, since the `ConstraintLayout` is not a scrollable view.
Like this:
```
<androidx.core.widget.NestedScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_gravity="bottom"
app:layout_anchor="@+id/app_bar"
app:layout_anchorGravity="bottom"
app:layout_behavior="@string/appbar_scrolling_view_behavior">
<include layout="@layout/profile_contents"
android:layout_height="wrap_content"
android:layout_width="match_parent" />
</androidx.core.widget.NestedScrollView>
```
Don't forget to put the `app:layout_behavior` in the `NestedScrollView` | Add this in the `profile_contents` layout's root view
```
app:layout_behavior="@string/appbar_scrolling_view_behavior"
```
>
> We need to define an association between the AppBarLayout and the View that will be scrolled. Add an `app:layout_behavior` to a RecyclerView or any other View capable of nested scrolling such as NestedScrollView. The support library contains a special string resource `@string/appbar_scrolling_view_behavior` that maps to `AppBarLayout.ScrollingViewBehavior`, which is used to notify the `AppBarLayout` when scroll events occur on this particular view. The behavior must be established on the view that triggers the event.
>
The above text is from here <https://guides.codepath.com/android/handling-scrolls-with-coordinatorlayout> |
62,673,653 | I am having an issue when running multiple tests one after another in WebdriverIO. When I am running multiple tests (for example, a describe that contains multiple its), I am getting a reCAPTCHA that basically fails the test. Is there a way to overcome this?
Thanks | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673653",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10431512/"
] | Put your included layout inside `NestedScrollView` view, since the `ConstraintLayout` is not a scrollable view.
Like this:
```
<androidx.core.widget.NestedScrollView
android:layout_width="match_parent"
android:layout_height="match_parent"
android:layout_gravity="bottom"
app:layout_anchor="@+id/app_bar"
app:layout_anchorGravity="bottom"
app:layout_behavior="@string/appbar_scrolling_view_behavior">
<include layout="@layout/profile_contents"
android:layout_height="wrap_content"
android:layout_width="match_parent" />
</androidx.core.widget.NestedScrollView>
```
Don't forget to put the `app:layout_behavior` in the `NestedScrollView` | Add this in the `profile_contents` layout's root view
```
app:layout_behavior="@string/appbar_scrolling_view_behavior"
```
>
> We need to define an association between the AppBarLayout and the View that will be scrolled. Add an `app:layout_behavior` to a RecyclerView or any other View capable of nested scrolling such as NestedScrollView. The support library contains a special string resource `@string/appbar_scrolling_view_behavior` that maps to `AppBarLayout.ScrollingViewBehavior`, which is used to notify the `AppBarLayout` when scroll events occur on this particular view. The behavior must be established on the view that triggers the event.
>
The above text is from here <https://guides.codepath.com/android/handling-scrolls-with-coordinatorlayout> |
62,673,671 | I am trying to iterate through a set of results similar to the below; to select them I perform the following:
```
for a in browser.find_elements_by_css_selector(".inner-row"):
```
What I then want to do is return:
a. The x in the class next to time-x (e.g. 26940 in the example)
b. filter to bananas only
c. Grab the suffix to "row-x" in the id
For each result. I can then iterate through the results for each of these that meets the parameters.
I have tried the get attribute function but this doesn't return any results, and .text is out of the question due to no real information between the tags.
```
<div id="bookingResults bookingGroup-111">
<div id="row-1522076067"
class="row row-time group-111 time-26940 amOnly bananas groupOnly rule-1252"
style="display: block;">
<div class="lockOverlay lock-row-124" style="display: none;"><div class="lockInfoCont"><p class="lockedText">Locked <span class="miclub-icon icon-lock"></span></p></div><div class="lockTimer"></div></div>
<div class="col-lg-3 col-md-4 col-sm-4 col-xs-4 row-heading " id="heading-1522076067" >
<div class="row">
<div class="col-lg-4 col-md-4 col-sm-5 col-xs-5 row-heading-inner">
<h3>07:29 am</h3>
<h4>
Choose Me
<br/>
<span id="rule-name-row-1522076067" style="display: none">
</span>
</h4>
</div>
<div class="col-lg-8 col-md-8 col-sm-7 col-xs-7 row-heading-inner">
<button id="btn-book-group-1522076067"
class="btn btn-book-group hide"
title="Book Row" >
<span class="btn-label">BOOK GROUP</span>
</button>
<div class="row-information">
</div>
</div>
</div>
</div>
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1513726/"
] | I guess this HTML is each of the `a` elements in your loop; then you can extract the `id` and `time` with the following code:
```py
from selenium.common.exceptions import NoSuchElementException

for a in browser.find_elements_by_css_selector(".inner-row"):
    try:
        el = a.find_element_by_css_selector("div.bananas")
        print("id: %s" % el.get_attribute("id").split("-")[1])
        print("time: %s" % [s for s in el.get_attribute("class").split(" ") if "time-" in s][0].split("time-")[1])
    except NoSuchElementException:
        pass
``` | You can get the ids like so
```py
print([e.get_attribute('id') for e in driver.find_elements(By.CSS_SELECTOR, 'div.bananas')])
```
Prints `['row-1522076067']` |
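The attribute parsing in the answers above needs no browser to verify. A pure-Python sketch, using the id/class strings copied from the question's HTML:

```python
# Illustrative values taken from the question's HTML.
elem_id = "row-1522076067"
elem_class = "row row-time group-111 time-26940 amOnly bananas groupOnly rule-1252"

# c. the suffix of "row-x" in the id
row_suffix = elem_id.split("-", 1)[1]
# a. the x in the "time-x" class token ("row-time" does not match,
#    because it does not start with "time-")
time_value = next(c.split("-", 1)[1] for c in elem_class.split() if c.startswith("time-"))
# b. filter to bananas only
is_banana = "bananas" in elem_class.split()

print(row_suffix, time_value, is_banana)  # 1522076067 26940 True
```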
62,673,671 | I am trying to iterate through a set of results similar to the below; to select them I perform the following:
```
for a in browser.find_elements_by_css_selector(".inner-row"):
```
What I then want to do is return:
a. The x in the class next to time-x (e.g. 26940 in the example)
b. filter to bananas only
c. Grab the suffix to "row-x" in the id
For each result. I can then iterate through the results for each of these that meets the parameters.
I have tried the get attribute function but this doesn't return any results, and .text is out of the question due to no real information between the tags.
```
<div id="bookingResults bookingGroup-111">
<div id="row-1522076067"
class="row row-time group-111 time-26940 amOnly bananas groupOnly rule-1252"
style="display: block;">
<div class="lockOverlay lock-row-124" style="display: none;"><div class="lockInfoCont"><p class="lockedText">Locked <span class="miclub-icon icon-lock"></span></p></div><div class="lockTimer"></div></div>
<div class="col-lg-3 col-md-4 col-sm-4 col-xs-4 row-heading " id="heading-1522076067" >
<div class="row">
<div class="col-lg-4 col-md-4 col-sm-5 col-xs-5 row-heading-inner">
<h3>07:29 am</h3>
<h4>
Choose Me
<br/>
<span id="rule-name-row-1522076067" style="display: none">
</span>
</h4>
</div>
<div class="col-lg-8 col-md-8 col-sm-7 col-xs-7 row-heading-inner">
<button id="btn-book-group-1522076067"
class="btn btn-book-group hide"
title="Book Row" >
<span class="btn-label">BOOK GROUP</span>
</button>
<div class="row-information">
</div>
</div>
</div>
</div>
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673671",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1513726/"
] | I guess this HTML is each of the `a` elements in your loop; then you can extract the `id` and `time` with the following code:
```py
from selenium.common.exceptions import NoSuchElementException

for a in browser.find_elements_by_css_selector(".inner-row"):
    try:
        el = a.find_element_by_css_selector("div.bananas")
        print("id: %s" % el.get_attribute("id").split("-")[1])
        print("time: %s" % [s for s in el.get_attribute("class").split(" ") if "time-" in s][0].split("time-")[1])
    except NoSuchElementException:
        pass
``` | To handle the dynamic elements, induce `WebDriverWait`() and wait for `visibility_of_all_elements_located`() with the following CSS selector.
Then use a **regular expression** to get the values from the element attributes.
**Code**
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import re
driver=webdriver.Chrome()
driver.get("URL here")
elements=WebDriverWait(driver,10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,"div[id^='bookingResults']>div.bananas")))
for element in elements:
print(re.findall("row-(\d+)",element.get_attribute("id"))[0])
classatr=element.get_attribute("class")
print(re.findall("time-(\d+)",classatr)[0])
``` |
62,673,672 | There are a lot of similar questions to this, but none have helped me so I assume a nuance to my version which I'm missing.
I have two DataFrames: df1 and df2 (of the same dimensions, which can be mapped 1-to-1 on a unique column, i.e. Name), and I want all rows which exist in df1 and are different from the corresponding row in df2.
I've tried more elegant solutions involving .isin() and ugly solutions using loops. But nothing returns the correct solution.
I post below one of the less Pythonic solutions since I believe it displays what I am trying to do most explicitly:
```
df1['hash'] = df1[common_fields].apply(lambda x: hash(tuple(x)), axis=1)
df2['hash'] = df2[common_fields].apply(lambda x: hash(tuple(x)), axis=1)
df = pd.DataFrame(columns=df1.columns)
df2_hashes = df2['hash'].tolist()
for i in range(len(df1)):
if not df1['hash'].iloc[i] in df2_hashes:
df = df.append(df1.iloc[i])
```
NB. The above attempt returns all rows, whether different or not. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4508879/"
] | I guess this HTML is each of the `a` elements in your loop; then you can extract the `id` and `time` with the following code:
```py
from selenium.common.exceptions import NoSuchElementException

for a in browser.find_elements_by_css_selector(".inner-row"):
    try:
        el = a.find_element_by_css_selector("div.bananas")
        print("id: %s" % el.get_attribute("id").split("-")[1])
        print("time: %s" % [s for s in el.get_attribute("class").split(" ") if "time-" in s][0].split("time-")[1])
    except NoSuchElementException:
        pass
``` | You can get the ids like so
```py
print([e.get_attribute('id') for e in driver.find_elements(By.CSS_SELECTOR, 'div.bananas')])
```
Prints `['row-1522076067']` |
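Back to the DataFrame question itself: the hashing idea in the question boils down to a set difference, which can be sanity-checked with a stdlib-only sketch (plain dicts standing in for rows; with pandas the usual equivalent is a merge with `indicator=True`):

```python
# Illustrative, stdlib-only sketch of the question's hashing idea:
# keep the rows of df1 whose (common-field) tuple does not occur in df2.
df1 = [{"Name": "a", "x": 1}, {"Name": "b", "x": 2}, {"Name": "c", "x": 3}]
df2 = [{"Name": "a", "x": 1}, {"Name": "b", "x": 9}, {"Name": "c", "x": 3}]
common_fields = ["Name", "x"]

def key(row):
    return tuple(row[f] for f in common_fields)

df2_keys = {key(r) for r in df2}          # a set, not a list: O(1) membership
diff = [r for r in df1 if key(r) not in df2_keys]
print(diff)  # [{'Name': 'b', 'x': 2}]
```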
62,673,672 | There are a lot of similar questions to this, but none have helped me so I assume a nuance to my version which I'm missing.
I have two DataFrames: df1 and df2 (of the same dimensions, which can be mapped 1-to-1 on a unique column, i.e. Name), and I want all rows which exist in df1 and are different from the corresponding row in df2.
I've tried more elegant solutions involving .isin() and ugly solutions using loops. But nothing returns the correct solution.
I post below one of the less Pythonic solutions since I believe it displays what I am trying to do most explicitly:
```
df1['hash'] = df1[common_fields].apply(lambda x: hash(tuple(x)), axis=1)
df2['hash'] = df2[common_fields].apply(lambda x: hash(tuple(x)), axis=1)
df = pd.DataFrame(columns=df1.columns)
df2_hashes = df2['hash'].tolist()
for i in range(len(df1)):
if not df1['hash'].iloc[i] in df2_hashes:
df = df.append(df1.iloc[i])
```
NB. The above attempt returns all rows, whether different or not. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673672",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4508879/"
] | I guess this HTML is each of the `a` elements in your loop; then you can extract the `id` and `time` with the following code:
```py
from selenium.common.exceptions import NoSuchElementException

for a in browser.find_elements_by_css_selector(".inner-row"):
    try:
        el = a.find_element_by_css_selector("div.bananas")
        print("id: %s" % el.get_attribute("id").split("-")[1])
        print("time: %s" % [s for s in el.get_attribute("class").split(" ") if "time-" in s][0].split("time-")[1])
    except NoSuchElementException:
        pass
``` | To handle the dynamic elements, induce `WebDriverWait`() and wait for `visibility_of_all_elements_located`() with the following CSS selector.
Then use a **regular expression** to get the values from the element attributes.
**Code**
```
from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
import re
driver=webdriver.Chrome()
driver.get("URL here")
elements=WebDriverWait(driver,10).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR,"div[id^='bookingResults']>div.bananas")))
for element in elements:
print(re.findall("row-(\d+)",element.get_attribute("id"))[0])
classatr=element.get_attribute("class")
print(re.findall("time-(\d+)",classatr)[0])
``` |
62,673,697 | When I save a pair RDD with `rdd.repartition(1).saveAsTextFile(file_path)`, the following error occurs.
```
Py4JJavaError: An error occurred while calling o142.saveAsTextFile.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
at org.apache.spark.api.java.JavaRDDLike$class.saveAsTextFile(JavaRDDLike.scala:550)
at org.apache.spark.api.java.AbstractJavaRDDLike.saveAsTextFile(JavaRDDLike.scala:45)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 2, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/spark/python/lib/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/home/spark/python/lib/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
[Previous line repeated 2 more times]
File "/home/spark/python/pyspark/rdd.py", line 350, in func
return f(iterator)
File "/home/spark/python/pyspark/rdd.py", line 1951, in groupByKey
merger.mergeCombiners(it)
File "/home/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 288, in mergeCombiners
self._spill()
File "/home/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 735, in _spill
self.serializer.dump_stream([(k, self.data[k])], streams[h])
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 383, in dump_stream
bytes = self.serializer.dumps(vs)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 634, in dumps
return zlib.compress(self.serializer.dumps(obj), 1)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 562, in dumps
return pickle.dumps(obj, protocol)
MemoryError
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:470)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:453)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1651)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1639)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1638)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1638)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1872)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1821)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1810)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2034)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2055)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2087)
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
... 41 more
Caused by: org.apache.spark.api.python.PythonException: Traceback (most recent call last):
File "/home/spark/python/lib/pyspark.zip/pyspark/worker.py", line 253, in main
process()
File "/home/spark/python/lib/pyspark.zip/pyspark/worker.py", line 248, in process
serializer.dump_stream(func(split_index, iterator), outfile)
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
File "/home/spark/python/pyspark/rdd.py", line 2440, in pipeline_func
return func(split, prev_func(split, iterator))
[Previous line repeated 2 more times]
File "/home/spark/python/pyspark/rdd.py", line 350, in func
return f(iterator)
File "/home/spark/python/pyspark/rdd.py", line 1951, in groupByKey
merger.mergeCombiners(it)
File "/home/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 288, in mergeCombiners
self._spill()
File "/home/spark/python/lib/pyspark.zip/pyspark/shuffle.py", line 735, in _spill
self.serializer.dump_stream([(k, self.data[k])], streams[h])
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 331, in dump_stream
self.serializer.dump_stream(self._batched(iterator), stream)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 383, in dump_stream
bytes = self.serializer.dumps(vs)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 634, in dumps
return zlib.compress(self.serializer.dumps(obj), 1)
File "/home/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 562, in dumps
return pickle.dumps(obj, protocol)
MemoryError
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.handlePythonException(PythonRunner.scala:330)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:470)
at org.apache.spark.api.python.PythonRunner$$anon$1.read(PythonRunner.scala:453)
at org.apache.spark.api.python.BasePythonRunner$ReaderIterator.hasNext(PythonRunner.scala:284)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:408)
at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
... 1 more
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673697",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9276708/"
] | The repartition method triggers a full shuffle, and if the dataset is large the driver will run into a memory error. Try increasing the number of partitions passed to `repartition`. | Check your Java version. Uninstalling the 32-bit version and installing the 64-bit version solved it for me.
62,673,701 | `data = {string1,string2}`
The `data` array is fetched from a PostgreSQL database that has a column of type text[], i.e. a text array.
I need to empty this array.
OUTPUT:
`data = {}`
Can I use
`tablename.update_attributes!(data: "")`
OR
`tablename.data.map!{ |e| e.destroy }`
Context:
```
EMAILS.each do |email|
res = tablename.where('lower(email) = lower(?)',
"#{email}")
res.each do |u|
u.data.map! { |e| e.destroy } # this is where I want to empty the array
end
puts ""
end;
end;
```
I am very new to Rails. Can someone suggest something I can use? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673701",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8203419/"
] | The repartition method triggers a full shuffle, and if the dataset is large the driver will run into a memory error. Try increasing the number of partitions passed to `repartition`. | Check your Java version. Uninstalling the 32-bit version and installing the 64-bit version solved it for me.
62,673,726 | There is a React component stored inside of a state variable. I've attached sample code and created a [codesandbox sample](https://codesandbox.io/s/mystifying-robinson-wz4uf). As you can see, `nameBadgeComponent` is supposed to display the `{name}` field from state. It works fine for the default value, but does not react to changes in the `name` variable. Is there a way to make it work without updating the `nameBadgeComponent` itself?
```
const [name, setName] = useState("DefaultName");
const [nameBadgeComponent] = useState(<h2>My name is: {name}</h2>);
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673726",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/9890829/"
] | You can't create a component from a useState. A state is all the properties and data that define what condition/appearance/etc. the component is in. A useState allows you to control a piece of information or data, but it doesn't respond to changes; a component does.
The fact that you put "Component" in the name should give you a hint ;)
What you want to do is probably this:
```
function NameBadgeComponent({ name }) {
return <h2>My name is: {name}</h2>;
}
export default function App() {
const [name, setName] = useState("DefaultName");
return (
<div className="App">
<NameBadgeComponent name={name} />
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
This will update properly when `name` changes. | Why are you extracting the nameBadgeComponent to a state? That is useless, as it only relies on `name`. Keeping your code as close to the original as possible:
```js
const [name, setName] = useState("DefaultName");
const nameBadgeComponent = <h2>My name is: {name}</h2>;
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
Context as to why it does not work: if you use `useState` you are defining the starting state; the code in there is never run again unless you use `setState` (in this case `setNameBadgeComponent`).
Therefore it's never updated.
EDIT:
The other answer, defining the extra component, is more React-ish code; if you are going to accept an answer, accept that one.
62,673,733 | I am a total beginner to Python and coding. I was just wondering if there is a way for Python to read what I've typed into the console. For example: if it prints out a question, I can answer with one of many choices. This is what I've tried so far.
```
answer = input("Vilken?")
if answer.lower().strip() == "1":
    print("Okej! (1)")
elif answer.lower().strip() == "2":
    print("Okej! (2)")
elif answer.lower().strip() == "3":
    print("Okej! (3)")
```
I got the code from a guy on YouTube; for some reason, it doesn't read what I'm typing in.
I am trying to store what the player types in a variable, so I can use it later. In C# I used
`string (variable name) = Console.ReadLine()`. If there is a way to do this in Python, please let me know. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673733",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13846121/"
] | You can't create a component from a useState. A state is all the properties and data that define what condition/appearance/etc. the component is in. A useState allows you to control a piece of information or data, but it doesn't respond to changes; a component does.
The fact that you put "Component" in the name should give you a hint ;)
What you want to do is probably this:
```
function NameBadgeComponent({ name }) {
return <h2>My name is: {name}</h2>;
}
export default function App() {
const [name, setName] = useState("DefaultName");
return (
<div className="App">
<NameBadgeComponent name={name} />
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
This will update properly when `name` changes. | Why are you extracting the nameBadgeComponent to a state? That is useless, as it only relies on `name`. Keeping your code as close to the original as possible:
```js
const [name, setName] = useState("DefaultName");
const nameBadgeComponent = <h2>My name is: {name}</h2>;
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
Context as to why it does not work: if you use `useState` you are defining the starting state; the code in there is never run again unless you use `setState` (in this case `setNameBadgeComponent`).
Therefore it's never updated.
EDIT:
The other answer, defining the extra component, is more React-ish code; if you are going to accept an answer, accept that one.
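For the Python question in this record (qid 62,673,733, reading console input): `input()` is Python's counterpart to C#'s `Console.ReadLine()` — it returns whatever the user typed as a `str`. A minimal sketch with the branching factored into a testable function (the prompt and reply strings are just the ones from the question; the fallback string is an illustrative assumption):

```python
def handle_answer(answer):
    # input() returns a str; normalize it before comparing.
    replies = {"1": "Okej! (1)", "2": "Okej! (2)", "3": "Okej! (3)"}
    return replies.get(answer.lower().strip(), "Okänt val")  # fallback for unknown input

# In the real program the answer would come from the console:
# answer = input("Vilken?")
print(handle_answer(" 2 "))  # prints: Okej! (2)
```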
62,673,738 | I have a pretty simple .csv table:
[![enter image description here](https://i.stack.imgur.com/q4fbi.png)](https://i.stack.imgur.com/q4fbi.png)
I use this table as an input parameter when creating a model in Create ML, where the target is PRICE and CITY and DATE are feature columns.
I need to get a price prediction for a given date in the future for a particular city.
The code below gives a different price for different dates, as it should; however, it gives the same result regardless of the given city:
```
let prediction = try? model.prediction(
CITY: name, DATE: date
)
let price = prediction?.PRICE
```
The price for a given date in the future in Paris should not be equal to the price for the same date in New York.
Do I really need to create 2 different models for each of the cities?
Thank you! | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673738",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/1213848/"
] | You can't create a component from a useState. A state is all the properties and data that define what condition/appearance/etc. the component is in. A useState allows you to control a piece of information or data, but it doesn't respond to changes; a component does.
The fact that you put "Component" in the name should give you a hint ;)
What you want to do is probably this:
```
function NameBadgeComponent({ name }) {
return <h2>My name is: {name}</h2>;
}
export default function App() {
const [name, setName] = useState("DefaultName");
return (
<div className="App">
<NameBadgeComponent name={name} />
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
This will update properly when `name` changes. | Why are you extracting the nameBadgeComponent to a state? That is useless, as it only relies on `name`. Keeping your code as close to the original as possible:
```js
const [name, setName] = useState("DefaultName");
const nameBadgeComponent = <h2>My name is: {name}</h2>;
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
Context as to why it does not work: if you use `useState` you are defining the starting state; the code in there is never run again unless you use `setState` (in this case `setNameBadgeComponent`).
Therefore it's never updated.
EDIT:
The other answer, defining the extra component, is more React-ish code; if you are going to accept an answer, accept that one.
62,673,739 | I have an Android App (written in Java) which retrieves a JSON object from the backend, parses it, and displays the data in the app. Everything is working fine (meaning that every field is being displayed correctly) except for one field. All the fields being displayed correctly are `String` whereas the one field which is causing the error is a string array!
Sample object being retrieved from the backend:
```
{
"attendance_type": "2",
"guest": [
"Test Guest",
"Test Guest 2"
],
"member_id": "1770428",
"attendance_time": "2020-04-27 04:42:22",
"name": "HENRY HHH",
"last_name": "",
"email": "henry@mailinator.com",
"onesignal_playerid": "",
"user_image": "311591.png",
"dateOfBirth": "06/22/1997",
"employeeID": "543210",
"socialSecurityNumber": "0000"
}
```
As I said, all the fields are being retrieved correctly except the "guest" field.
This is the class in which everything is Serialized:
```
package com.lu.scanner.ui.attendance.model;
import com.google.gson.annotations.Expose;
import com.google.gson.annotations.SerializedName;
import java.util.List;
public class AttendanceDetails {
String date;
@SerializedName("attendance_type")
private String attendance_type;
@SerializedName("member_id")
private String member_id;
@SerializedName("attendance_date")
private String attendance_date;
@SerializedName("name")
private String name;
@SerializedName("email")
private String email;
@SerializedName("onesignal_playerid")
private String onesignal_playerid;
@SerializedName("user_image")
private String user_image;
@SerializedName("dateOfBirth")
private String dateOfBirth;
@SerializedName("employeeID")
private String employeeID;
@SerializedName("socialSecurityNumber")
private String socialSecurityNumber;
@SerializedName("attendance_time")
private String attendance_time;
@SerializedName("guest")
private String[] guest;
public AttendanceDetails(String date) {
this.date = date;
}
public String getDate() {
return date;
}
public void setDate(String date) {
this.date = date;
}
public String getAttendance_type() {
return attendance_type;
}
public void setAttendance_type(String attendance_type) {
this.attendance_type = attendance_type;
}
public String getMember_id() {
return member_id;
}
public void setMember_id(String member_id) {
this.member_id = member_id;
}
public String getAttendance_date() {
return attendance_date;
}
public void setAttendance_date(String attendance_date) {
this.attendance_date = attendance_date;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
public String getEmail() {
return email;
}
public void setEmail(String email) {
this.email = email;
}
public String getOnesignal_playerid() {
return onesignal_playerid;
}
public void setOnesignal_playerid(String onesignal_playerid) {
this.onesignal_playerid = onesignal_playerid;
}
public String getUser_image() {
return user_image;
}
public void setUser_image(String user_image) {
this.user_image = user_image;
}
public String getDateOfBirth() {
return dateOfBirth;
}
public void setDateOfBirth(String dateOfBirth) {
this.dateOfBirth = dateOfBirth;
}
public String getEmployeeID() {
return employeeID;
}
public void setEmployeeID(String employeeID) {
this.employeeID = employeeID;
}
public String getSocialSecurityNumber() {
return socialSecurityNumber;
}
public void setSocialSecurityNumber(String socialSecurityNumber) {
this.socialSecurityNumber = socialSecurityNumber;
}
public String getAttendance_time() {
return attendance_time;
}
public void setAttendance_time(String attendance_time) {
this.attendance_time = attendance_time;
}
public String[] getGuest(){
return guest;
}
public void setGuest(String[] guest){
this.guest=guest;
}
}
```
This is the SQLLite database:
```
private static final String CREATE_TABLE_ATTENDANCE_DETAILS = "CREATE TABLE IF NOT EXISTS " + TABLE_ATTENDANCE_DETAILS +
"( date TEXT , " +
"attendance_type TEXT, " +
"member_id TEXT, " +
"attendance_date TEXT, " +
"name TEXT, " +
"email TEXT, " +
"onesignal_playerid TEXT, " +
"user_image TEXT, " +
"dateOfBirth TEXT, " +
"employeeID TEXT, " +
"socialSecurityNumber TEXT, " +
"attendance_time TEXT, " +
"guest TEXT); ";
```
And finally, there is where the data is being retrieved:
```
public List<AttendanceDetails> getAllAttendanceDetails() {
List<AttendanceDetails> attendanceDetailsList = new ArrayList<AttendanceDetails>();
String selectQuery = "SELECT * FROM " + TABLE_ATTENDANCE_DETAILS;
SQLiteDatabase db = this.getWritableDatabase();
Cursor cursor = db.rawQuery(selectQuery, null);
if (cursor.moveToFirst()) {
do {
AttendanceDetails attendanceDetails = new AttendanceDetails();
attendanceDetails.setDate(cursor.getString(0));
attendanceDetails.setAttendance_type(cursor.getString(1));
attendanceDetails.setMember_id(cursor.getString(2));
attendanceDetails.setAttendance_date(cursor.getString(3));
attendanceDetails.setName(cursor.getString(4));
attendanceDetails.setEmail(cursor.getString(5));
attendanceDetails.setOnesignal_playerid(cursor.getString(6));
attendanceDetails.setUser_image(cursor.getString(7));
attendanceDetails.setDateOfBirth(cursor.getString(8));
attendanceDetails.setEmployeeID(cursor.getString(9));
attendanceDetails.setSocialSecurityNumber(cursor.getString(10));
attendanceDetails.setAttendance_time(cursor.getString(11));
attendanceDetails.setGuest(cursor.getString(12));
attendanceDetailsList.add(attendanceDetails);
} while (cursor.moveToNext());
}
return attendanceDetailsList;
}
```
Therefore, the main problem, I think, is that the TEXT type in the table creation is not compatible with the String array. Plus, I think the `cursor.getString()` call is not working properly for the `"guest"` string array. What can I do to make all of this code compatible with the `"guest"` field?
NOTE: Everything is working perfectly fine except for the guest field... | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673739",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/7015166/"
] | You can't create a component from a useState. A state is all the properties and data that define what condition/appearance/etc. the component is in. A useState allows you to control a piece of information or data, but it doesn't respond to changes; a component does.
The fact that you put "Component" in the name should give you a hint ;)
What you want to do is probably this:
```
function NameBadgeComponent({ name }) {
return <h2>My name is: {name}</h2>;
}
export default function App() {
const [name, setName] = useState("DefaultName");
return (
<div className="App">
<NameBadgeComponent name={name} />
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
This will update properly when `name` changes. | Why are you extracting the nameBadgeComponent to a state? That is useless, as it only relies on `name`. Keeping your code as close to the original as possible:
```js
const [name, setName] = useState("DefaultName");
const nameBadgeComponent = <h2>My name is: {name}</h2>;
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
Context as to why it does not work: if you use `useState` you are defining the starting state; the code in there is never run again unless you use `setState` (in this case `setNameBadgeComponent`).
Therefore it's never updated.
EDIT:
The other answer, defining the extra component, is more React-ish code; if you are going to accept an answer, accept that one.
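For the SQLite question in this record (qid 62,673,739): a `TEXT` column cannot hold an array directly, so a common approach is to serialize the array to a single string (e.g. JSON) on write and parse it back on read — the same idea applies to the Java code via Gson. A sketch of the round trip, shown here with Python's built-in `sqlite3` purely for illustration:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attendance (member_id TEXT, guest TEXT)")

guests = ["Test Guest", "Test Guest 2"]
# Serialize the array to a JSON string so it fits in the TEXT column.
conn.execute("INSERT INTO attendance VALUES (?, ?)",
             ("1770428", json.dumps(guests)))

row = conn.execute("SELECT guest FROM attendance").fetchone()
restored = json.loads(row[0])  # back to a list of strings
print(restored)  # prints: ['Test Guest', 'Test Guest 2']
```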
62,673,756 | I am trying to autofill a text box based on the state from another component, which works fine. But when I add ngModel to the text box to take in the value when submitting the form, the value is cleared and not displayed to the user.
```html
<div *ngIf="autoFill; else noFillMode" class="row">
<label>Mode of Transport</label>
<input type="text" value="{{state.type}}" readonly />
</div>
<div *ngIf="autoFill; else noFillStop" class="row">
<label>Nearby Stop / Station</label>
<input
type="text"
value="{{state.stopName}}"
[(ngModel)]="stopName"
readonly
/>
</div>
```
[![enter image description here](https://i.stack.imgur.com/iQq7g.png)](https://i.stack.imgur.com/iQq7g.png)
As you can see in the image, the text shows up when there is no ngModel placed on the input. But when there is an ngModel, it is not shown.
Can anyone help with this? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673756",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/4682626/"
] | You can't create a component from a useState. A state is all the properties and data that define what condition/appearance/etc. the component is in. A useState allows you to control a piece of information or data, but it doesn't respond to changes; a component does.
The fact that you put "Component" in the name should give you a hint ;)
What you want to do is probably this:
```
function NameBadgeComponent({ name }) {
return <h2>My name is: {name}</h2>;
}
export default function App() {
const [name, setName] = useState("DefaultName");
return (
<div className="App">
<NameBadgeComponent name={name} />
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
This will update properly when `name` changes. | Why are you extracting the nameBadgeComponent to a state? That is useless, as it only relies on `name`. Keeping your code as close to the original as possible:
```js
const [name, setName] = useState("DefaultName");
const nameBadgeComponent = <h2>My name is: {name}</h2>;
return (
<div className="App">
{nameBadgeComponent}
<button
onClick={() => {
const newName = Math.random()
.toString(36)
.substring(3);
console.log("Renamed to:", newName);
setName(newName);
}}
>
rename
</button>
</div>
);
}
```
Context as to why it does not work: if you use `useState` you are defining the starting state; the code in there is never run again unless you use `setState` (in this case `setNameBadgeComponent`).
Therefore it's never updated.
EDIT:
The other answer, defining the extra component, is more React-ish code; if you are going to accept an answer, accept that one.
62,673,771 | If I declare a function like below:
```
function a(){
//do something
}
```
And if I execute it with `a()`, it gets pushed onto the top of the call stack and popped off when it's finished. While the function is on the call stack, the main thread is blocked and can't do other things until the call stack is empty.
But what happens exactly with an `async` function?
If I do:
```
async function b() {
//do something
}
```
The function returns a promise and does not block the main thread. Does this mean this function gets passed to the Web API instead of the call stack, and then after it's done it gets passed to the callback queue and then to the call stack, so that we can execute the callback function?
```
b().then(function() {
//do something after finish
})
``` | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673771",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/10990737/"
] | Yes, `async function`s will be on the call stack during their execution just like normal functions. You can consider the desugaring of the `await` keyword:
```
function log(x) {
console.log("at "+x);
}
function delay(t) {
return new Promise(resolve => setTimeout(resolve, t));
}
async function example() {
log(1);
await delay(50);
log(2);
return 3;
}
function example2() {
log(1);
return delay(50).then(function cont() {
log(2);
return 3;
})
}
```
An `example()` call works exactly like an `example2()` call. The called function gets pushed on the stack, calls `log(1)` which gets pushed on the stack and runs, when that returns (gets popped from the stack) it calls `delay(50)` which gets put on the stack and runs, and so on. Now, when the `await` operator is evaluated, the async function will take the operand promise, attach callbacks using a `.then()` call, and then *return from the `example()` call* a promise for the eventual completion of the function body. So when it reaches the `await` keyword, it gets popped from the stack, and the caller may continue running synchronous code (that will typically involve doing something with the returned promise).
Then, when the promise is resolved (in the above example, when the timeout is hit), jobs are scheduled to run the attached `then` callbacks. They might have to wait for the event loop to become idle (the callstack to become empty), and then when the promise job starts the code is put on the callstack again. In `example2`, this is the `cont` function that will get called; in the `async function` it will be the continuation of the `example` body, where it left off at the `await`. In both cases, it calls `log(2)` which gets pushed on the stack and runs, and when that returns (gets popped from the stack) the `return 3` statement is executed, which resolves the promise and pops the execution context from the stack, leaving it empty. | It does go onto the call stack; when it's time for the async function to wait, it gets moved off of the call stack until its promise is either fulfilled or rejected. At that point its callback gets moved into the callback queue, while all other synchronous functions can continue to execute off of the call stack. Then, when the call stack is empty, the event loop looks in the callback queue for anything else that needs to be executed; if there is anything there, the event loop pushes that callback onto the call stack to execute.
62,673,780 | I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume `$1 = "/somefolder/andanotherfolder/myfile.txt"`, the desired output would be `"myfile"`.
The current attempt creates an intermediate variable that I would like to avoid:
```
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
```
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908422/"
] | ```
NOEXT="${1##*/}"; NOEXT="${NOEXT%.*}"
``` | How about:
```
$ [[ $var =~ [^/]*$ ]] && echo ${BASH_REMATCH%.*}
myfile
``` |
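A self-contained sketch of the parameter-expansion approach above, wrapped in a function for easy testing (the helper name is my own, not from the answer):

```shell
# Two-step POSIX parameter expansion: strip the directory, then the extension.
strip_path_and_ext() {
    noext="${1##*/}"                 # drop everything up to the last "/"
    printf '%s\n' "${noext%.*}"      # drop the shortest trailing ".suffix"
}

strip_path_and_ext "/somefolder/andanotherfolder/myfile.txt"   # -> myfile
strip_path_and_ext "/a/b/archive.tar.gz"                       # -> archive.tar
```

Note that `%.*` removes only the last suffix, so multi-extension names like `archive.tar.gz` keep their inner extension.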
62,673,780 | I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume `$1 = "/somefolder/andanotherfolder/myfile.txt"`, the desired output would be `"myfile"`.
The current attempt creates an intermediate variable that I would like to avoid:
```
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
```
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908422/"
] | Why not Zoidberg ?
Ehhmm.. I meant why not remove the ext before going for basename ?
```
basename "${1%.*}"
```
Unless of course you have directory paths with dots, then you'll have to use basename before and remove the extension later:
```
echo $(basename "$1") | awk 'BEGIN { FS = "." }; { print $1 }'
```
The awk solution will remove **anything** after the first dot from the filename.
There's a regular expression based solution which uses sed to remove only the extension after last dot if it exists:
```
echo $(basename "$1") | sed 's/\(.*\)\..*/\1/'
```
This could even be improved if you're sure that you've got alphanumeric extensions of 3-4 characters (eg: mp3, mpeg, jpg, txt, json...)
```
echo $(basename "$1") | sed 's/\(.*\)\.[[:alnum:]]\{3\}$/\1/'
``` | How about:
```
$ [[ $var =~ [^/]*$ ]] && echo ${BASH_REMATCH%.*}
myfile
``` |
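The caveat above about directory paths with dots can be demonstrated with a quick experiment (the path is made up for illustration):

```shell
p="/some.folder/myfile"        # dotted directory, file without an extension

# Stripping the extension before basename eats part of the path:
basename "${p%.*}"             # -> some

# Taking the basename first is safe:
b=$(basename "$p")
printf '%s\n' "${b%.*}"        # -> myfile
```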
62,673,780 | I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume `$1 = "/somefolder/andanotherfolder/myfile.txt"`, the desired output would be `"myfile"`.
The current attempt creates an intermediate variable that I would like to avoid:
```
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
```
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908422/"
] | How about this?
```sh
NEXT="$(basename -- "${1%.*}")"
```
Testing:
```sh
set -- '/somefolder/andanotherfolder/myfile.txt'
NEXT="$(basename -- "${1%.*}")"
echo "$NEXT"
```
>
> `myfile`
>
>
>
Alternatively:
```sh
set -- "${1%.*}"; NEXT="${1##*/}"
``` | How about:
```
$ [[ $var =~ [^/]*$ ]] && echo ${BASH_REMATCH%.*}
myfile
``` |
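The `BASH_REMATCH` approach above can be exercised like this (bash-only; it will not work in plain sh):

```shell
#!/bin/bash
var="/somefolder/andanotherfolder/myfile.txt"

# [^/]*$ matches the final path component; the match lands in BASH_REMATCH.
if [[ $var =~ [^/]*$ ]]; then
    echo "${BASH_REMATCH%.*}"    # -> myfile
fi
```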
62,673,780 | I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume `$1 = "/somefolder/andanotherfolder/myfile.txt"`, the desired output would be `"myfile"`.
The current attempt creates an intermediate variable that I would like to avoid:
```
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
```
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908422/"
] | Why not Zoidberg ?
Ehhmm.. I meant why not remove the ext before going for basename ?
```
basename "${1%.*}"
```
Unless of course you have directory paths with dots, then you'll have to use basename before and remove the extension later:
```
echo $(basename "$1") | awk 'BEGIN { FS = "." }; { print $1 }'
```
The awk solution will remove **anything** after the first dot from the filename.
There's a regular expression based solution which uses sed to remove only the extension after last dot if it exists:
```
echo $(basename "$1") | sed 's/\(.*\)\..*/\1/'
```
This could even be improved if you're sure that you've got alphanumeric extensions of 3-4 characters (eg: mp3, mpeg, jpg, txt, json...)
```
echo $(basename "$1") | sed 's/\(.*\)\.[[:alnum:]]\{3\}$/\1/'
``` | ```
NOEXT="${1##*/}"; NOEXT="${NOEXT%.*}"
``` |
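The sed variant from the answer above, wrapped in a helper (name mine) and run on a few sample paths:

```shell
# Greedy \(.*\) captures up to the LAST dot, so only the final suffix is cut.
strip_ext() {
    printf '%s\n' "$(basename "$1")" | sed 's/\(.*\)\..*/\1/'
}

strip_ext "/somefolder/andanotherfolder/myfile.txt"   # -> myfile
strip_ext "/a/b/archive.tar.gz"                       # -> archive.tar
strip_ext "/a/b/noext"                                # -> noext (no dot, no change)
```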
62,673,780 | I need the basename of a file that is given as an argument to a bash script. The basename should be stripped of its file extension.
Let's assume `$1 = "/somefolder/andanotherfolder/myfile.txt"`, the desired output would be `"myfile"`.
The current attempt creates an intermediate variable that I would like to avoid:
```
BASE=$(basename "$1")
NOEXT="${BASE%.*}"
```
My attempt to make this a one-liner would be piping the output of basename. However, I do not know how to pipe stdout to a string substitution.
EDIT: this needs to work for multiple file extensions with possibly differing lengths, hence the string substitution attempt as given above. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673780",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/6908422/"
] | Why not Zoidberg ?
Ehhmm.. I meant why not remove the ext before going for basename ?
```
basename "${1%.*}"
```
Unless of course you have directory paths with dots, then you'll have to use basename before and remove the extension later:
```
echo $(basename "$1") | awk 'BEGIN { FS = "." }; { print $1 }'
```
The awk solution will remove **anything** after the first dot from the filename.
There's a regular expression based solution which uses sed to remove only the extension after last dot if it exists:
```
echo $(basename "$1") | sed 's/\(.*\)\..*/\1/'
```
This could even be improved if you're sure that you've got alphanumeric extensions of 3-4 characters (eg: mp3, mpeg, jpg, txt, json...)
```
echo $(basename "$1") | sed 's/\(.*\)\.[[:alnum:]]\{3\}$/\1/'
``` | How about this?
```sh
NEXT="$(basename -- "${1%.*}")"
```
Testing:
```sh
set -- '/somefolder/andanotherfolder/myfile.txt'
NEXT="$(basename -- "${1%.*}")"
echo "$NEXT"
```
>
> `myfile`
>
>
>
Alternatively:
```sh
set -- "${1%.*}"; NEXT="${1##*/}"
``` |
62,673,785 | ```
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"VpcId": {
"Type": "AWS::EC2::VPC::Id",
"Description": "VpcId of your existing Virtual Private Cloud (VPC)",
"ConstraintDescription": "must be the VPC Id of an existing Virtual Private Cloud."
},
"Subnets": {
"Type": "List<AWS::EC2::Subnet::Id>",
"Description": "The list of SubnetIds in your Virtual Private Cloud (VPC)"
},
"InstanceType": {
"Description": "WebServer EC2 instance type",
"Type": "String",
"Default": "t2.small",
"AllowedValues": [
"t1.micro",
"t2.nano",
"t2.micro",
"t2.small",
"t2.medium",
"t2.large",
"m1.small",
"m1.medium",
"cg1.4xlarge"
],
"ConstraintDescription": "must be a valid EC2 instance type."
},
"WebServerCapacity": {
"Default": "2",
"Description": "The initial number of WebServer instances",
"Type": "Number",
"MinValue": "1",
"MaxValue": "10",
"ConstraintDescription": "must be between 1 and 10 EC2 instances."
},
"KeyName": {
"Description": "The EC2 Key Pair to allow SSH access to the instances",
"Type": "AWS::EC2::KeyPair::KeyName",
"ConstraintDescription": "must be the name of an existing EC2 KeyPair."
},
"SSHLocation": {
"Description": "The IP address range that can be used to SSH to the EC2 instances",
"Type": "String",
"MinLength": "9",
"MaxLength": "18",
"Default": "0.0.0.0/0",
"AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
"ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
}
},
"Resources": {
"WebServerScaleUpPolicy": {
"Type": "AWS::AutoScaling::ScalingPolicy",
"Properties": {
"AdjustmentType": "ChangeInCapacity",
"AutoScalingGroupName": {
"Ref": "WebServerGroup"
},
"Cooldown": "60",
"ScalingAdjustment": 1
}
},
"WebServerScaleDownPolicy": {
"Type": "AWS::AutoScaling::ScalingPolicy",
"Properties": {
"AdjustmentType": "ChangeInCapacity",
"AutoScalingGroupName": {
"Ref": "WebServerGroup"
},
"Cooldown": "60",
"ScalingAdjustment": -1
}
},
"CPUAlarmHigh": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmDescription": "Scale-up if CPU > 70% for 5 minutes",
"MetricName": "CPUUtilization",
"Namespace": "AWS/EC2",
"Statistic": "Average",
"Period": 300,
"EvaluationPeriods": 2,
"Threshold": 70,
"AlarmActions": [{
"Ref": "WebServerScaleUpPolicy"
}],
"Dimensions": [{
"Name": "AutoScalingGroupName",
"Value": {
"Ref": "WebServerGroup"
}
}],
"ComparisonOperator": "GreaterThanThreshold"
}
},
"CPUAlarmLow": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmDescription": "Scale-down if CPU < 40% for 5 minutes",
"MetricName": "CPUUtilization",
"Namespace": "AWS/EC2",
"Statistic": "Average",
"Period": 300,
"EvaluationPeriods": 2,
"Threshold": 40,
"AlarmActions": [{
"Ref": "WebServerScaleDownPolicy"
}],
"Dimensions": [{
"Name": "AutoScalingGroupName",
"Value": {
"Ref": "WebServerGroup"
}
}],
"ComparisonOperator": "LessThanThreshold"
}
},
"ApplicationLoadBalancer": {
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties": {
"Name": "elb-test",
"Scheme": "internet-facing",
"IpAddressType": "ipv4",
"Type": "application",
"Subnets": {
"Ref": "Subnets"
}
}
},
"ALBListener": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"DefaultActions": [{
"Type": "forward",
"TargetGroupArn": {
"Ref": "ALBTargetGroup"
}
}],
"LoadBalancerArn": {
"Ref": "ApplicationLoadBalancer"
},
"Port": 80,
"Protocol": "HTTP"
}
},
"ALBTargetGroup": {
"Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"Name": "ELB-Group",
"HealthCheckIntervalSeconds": 30,
"HealthCheckTimeoutSeconds": 5,
"HealthyThresholdCount": 3,
"Port": 80,
"Protocol": "HTTP",
"TargetType": "instance",
"UnhealthyThresholdCount": 5,
"VpcId": {
"Ref": "VpcId"
}
}
},
"WebServerGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"VPCZoneIdentifier": {
"Ref": "Subnets"
},
"HealthCheckGracePeriod": 300,
"LaunchConfigurationName": {
"Ref": "LaunchConfig"
},
"MinSize": "1",
"MaxSize": "8",
"DesiredCapacity": {
"Ref": "WebServerCapacity"
},
"TargetGroupARNs": [{
"Ref": "ALBTargetGroup"
}]
},
"CreationPolicy": {
"ResourceSignal": {
"Timeout": "PT5M",
"Count": {
"Ref": "WebServerCapacity"
}
}
},
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MinInstancesInService": 1,
"MaxBatchSize": 1,
"PauseTime": "PT5M",
"WaitOnResourceSignals": true
}
}
},
"LaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"KeyName": {
"Ref": "KeyName"
},
"ImageId": "ami-00932e4c143f3fdf0",
"SecurityGroups": [{
"Ref": "InstanceSecurityGroup"
}],
"InstanceType": {
"Ref": "InstanceType"
},
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -xe\n",
"apt-get update -y\n",
"apt-get install -y python-setuptools\n",
"mkdir -p /opt/aws/bin\n",
"python /usr/lib/python2.7/dist-packages/easy_install.py --script-dir /opt/aws/bin https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ", { "Ref" : "AWS::StackName" },
" --resource EC2Instance ",
" --configsets full_install ",
" --region ", { "Ref" : "AWS::Region" }, "\n",
"/opt/aws/bin/cfn-signal -e $? ",
" --stack ", { "Ref" : "AWS::StackName" },
" --resource EC2Instance ",
" --region ", { "Ref" : "AWS::Region" }, "\n"
]]}}}
},
"InstanceSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Enable SSH access and HTTP from the load balancer only",
"SecurityGroupIngress": [{
"IpProtocol": "tcp",
"FromPort": 22,
"ToPort": 22,
"CidrIp": {
"Ref": "SSHLocation"
}
},
{
"IpProtocol": "tcp",
"FromPort": 80,
"ToPort": 80,
"SourceSecurityGroupId": {
"Fn::Select": [
0,
{
"Fn::GetAtt": [
"ApplicationLoadBalancer",
"SecurityGroups"
]
}
]
}
}
],
"VpcId": {
"Ref": "VpcId"
}
}
}
},
"Outputs": {
"URL": {
"Description": "The URL of the website",
"Value": {
"Fn::Join": [
"",
[
"http://",
{
"Fn::GetAtt": [
"ApplicationLoadBalancer",
"DNSName"
]
}
]
]
}
}
}
}
```
I am using this template to create auto-scaling with CloudFormation and I am using Ubuntu 18.04. Every time I am getting the same error:
Received 0 SUCCESS signal(s) out of 1. Unable to satisfy 100% MinSuccessfulInstancesPercent requirement
Failed to receive 1 resource signal(s) for the current batch. Each resource signal timeout is counted as a FAILURE.
Please let me know where I am lacking. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673785",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/12409879/"
] | I have ran your template through [cfn-lint](https://github.com/aws-cloudformation/cfn-python-lint) and got a lot of problems reported:
```
W2030 You must specify a valid allowed value for InstanceType (cg1.4xlarge).
Valid values are ['a1.2xlarge', 'a1.4xlarge', 'a1.large', 'a1.medium', 'a1.metal', 'a1.xlarge', 'c1.medium', 'c1.xlarge', 'c3.2xlarge', 'c3.4xlarge', 'c3.8xlarge', 'c3.large', 'c3.xlarge', 'c4.2xlarge', 'c4.4xlarge', 'c4.8xlarge', 'c4.large', 'c4.xlarge', 'c5.12xlarge', 'c5.18xlarge', 'c5.24xlarge', 'c5.2xlarge', 'c5.4xlarge', 'c5.9xlarge', 'c5.large', 'c5.metal', 'c5.xlarge', 'c5d.12xlarge', 'c5d.18xlarge', 'c5d.24xlarge', 'c5d.2xlarge', 'c5d.4xlarge', 'c5d.9xlarge', 'c5d.large', 'c5d.metal', 'c5d.xlarge', 'c5n.18xlarge', 'c5n.2xlarge', 'c5n.4xlarge', 'c5n.9xlarge', 'c5n.large', 'c5n.metal', 'c5n.xlarge', 'cc2.8xlarge', 'cr1.8xlarge', 'd2.2xlarge', 'd2.4xlarge', 'd2.8xlarge', 'd2.xlarge', 'f1.16xlarge', 'f1.2xlarge', 'f1.4xlarge', 'g2.2xlarge', 'g2.8xlarge', 'g3.16xlarge', 'g3.4xlarge', 'g3.8xlarge', 'g3s.xlarge', 'g4dn.12xlarge', 'g4dn.16xlarge', 'g4dn.2xlarge', 'g4dn.4xlarge', 'g4dn.8xlarge', 'g4dn.metal', 'g4dn.xlarge', 'h1.16xlarge', 'h1.2xlarge', 'h1.4xlarge', 'h1.8xlarge', 'hs1.8xlarge', 'i2.2xlarge', 'i2.4xlarge', 'i2.8xlarge', 'i2.xlarge', 'i3.16xlarge', 'i3.2xlarge', 'i3.4xlarge', 'i3.8xlarge', 'i3.large', 'i3.metal', 'i3.xlarge', 'i3en.12xlarge', 'i3en.24xlarge', 'i3en.2xlarge', 'i3en.3xlarge', 'i3en.6xlarge', 'i3en.large', 'i3en.metal', 'i3en.xlarge', 'm1.large', 'm1.medium', 'm1.small', 'm1.xlarge', 'm2.2xlarge', 'm2.4xlarge', 'm2.xlarge', 'm3.2xlarge', 'm3.large', 'm3.medium', 'm3.xlarge', 'm4.10xlarge', 'm4.16xlarge', 'm4.2xlarge', 'm4.4xlarge', 'm4.large', 'm4.xlarge', 'm5.12xlarge', 'm5.16xlarge', 'm5.24xlarge', 'm5.2xlarge', 'm5.4xlarge', 'm5.8xlarge', 'm5.large', 'm5.metal', 'm5.xlarge', 'm5a.12xlarge', 'm5a.16xlarge', 'm5a.24xlarge', 'm5a.2xlarge', 'm5a.4xlarge', 'm5a.8xlarge', 'm5a.large', 'm5a.xlarge', 'm5ad.12xlarge', 'm5ad.24xlarge', 'm5ad.2xlarge', 'm5ad.4xlarge', 'm5ad.large', 'm5ad.xlarge', 'm5d.12xlarge', 'm5d.16xlarge', 'm5d.24xlarge', 'm5d.2xlarge', 'm5d.4xlarge', 'm5d.8xlarge', 'm5d.large', 'm5d.metal', 'm5d.xlarge', 'm5dn.12xlarge', 
'm5dn.16xlarge', 'm5dn.24xlarge', 'm5dn.2xlarge', 'm5dn.4xlarge', 'm5dn.8xlarge', 'm5dn.large', 'm5dn.metal', 'm5dn.xlarge', 'm5n.12xlarge', 'm5n.16xlarge', 'm5n.24xlarge', 'm5n.2xlarge', 'm5n.4xlarge', 'm5n.8xlarge', 'm5n.large', 'm5n.metal', 'm5n.xlarge', 'p2.16xlarge', 'p2.8xlarge', 'p2.xlarge', 'p3.16xlarge', 'p3.2xlarge', 'p3.8xlarge', 'p3dn.24xlarge', 'r3.2xlarge', 'r3.4xlarge', 'r3.8xlarge', 'r3.large', 'r3.xlarge', 'r4.16xlarge', 'r4.2xlarge', 'r4.4xlarge', 'r4.8xlarge', 'r4.large', 'r4.xlarge', 'r5.12xlarge', 'r5.16xlarge', 'r5.24xlarge', 'r5.2xlarge', 'r5.4xlarge', 'r5.8xlarge', 'r5.large', 'r5.metal', 'r5.xlarge', 'r5a.12xlarge', 'r5a.16xlarge', 'r5a.24xlarge', 'r5a.2xlarge', 'r5a.4xlarge', 'r5a.8xlarge', 'r5a.large', 'r5a.xlarge', 'r5ad.12xlarge', 'r5ad.24xlarge', 'r5ad.2xlarge', 'r5ad.4xlarge', 'r5ad.large', 'r5ad.xlarge', 'r5d.12xlarge', 'r5d.16xlarge', 'r5d.24xlarge', 'r5d.2xlarge', 'r5d.4xlarge', 'r5d.8xlarge', 'r5d.large', 'r5d.metal', 'r5d.xlarge', 'r5dn.12xlarge', 'r5dn.16xlarge', 'r5dn.24xlarge', 'r5dn.2xlarge', 'r5dn.4xlarge', 'r5dn.8xlarge', 'r5dn.large', 'r5dn.metal', 'r5dn.xlarge', 'r5n.12xlarge', 'r5n.16xlarge', 'r5n.24xlarge', 'r5n.2xlarge', 'r5n.4xlarge', 'r5n.8xlarge', 'r5n.large', 'r5n.metal', 'r5n.xlarge', 't1.micro', 't2.2xlarge', 't2.large', 't2.medium', 't2.micro', 't2.nano', 't2.small', 't2.xlarge', 't3.2xlarge', 't3.large', 't3.medium', 't3.micro', 't3.nano', 't3.small', 't3.xlarge', 't3a.2xlarge', 't3a.large', 't3a.medium', 't3a.micro', 't3a.nano', 't3a.small', 't3a.xlarge', 'u-18tb1.metal', 'u-24tb1.metal', 'x1.16xlarge', 'x1.32xlarge', 'x1e.16xlarge', 'x1e.2xlarge', 'x1e.32xlarge', 'x1e.4xlarge', 'x1e.8xlarge', 'x1e.xlarge', 'z1d.12xlarge', 'z1d.2xlarge', 'z1d.3xlarge', 'z1d.6xlarge', 'z1d.large', 'z1d.metal', 'z1d.xlarge']
so.template:27:17
W7001 Mapping 'AWSInstanceType2Arch' is defined but not used
so.template:55:9
W7001 Mapping 'AWSInstanceType2NATArch' is defined but not used
so.template:87:9
E3012 Property Resources/WebServerScaleUpPolicy/Properties/ScalingAdjustment should be of type Integer
so.template:120:17
E3012 Property Resources/WebServerScaleDownPolicy/Properties/ScalingAdjustment should be of type Integer
so.template:131:17
E3012 Property Resources/CPUAlarmHigh/Properties/Period should be of type Integer
so.template:141:17
E3012 Property Resources/CPUAlarmHigh/Properties/EvaluationPeriods should be of type Integer
so.template:142:17
E3012 Property Resources/CPUAlarmHigh/Properties/Threshold should be of type Double
so.template:143:17
E3012 Property Resources/CPUAlarmLow/Properties/Period should be of type Integer
so.template:163:17
E3012 Property Resources/CPUAlarmLow/Properties/EvaluationPeriods should be of type Integer
so.template:164:17
E3012 Property Resources/CPUAlarmLow/Properties/Threshold should be of type Double
so.template:165:17
E3012 Property Resources/ALBListener/Properties/Port should be of type Integer
so.template:202:17
E3002 Invalid Property Resources/ALBTargetGroup/Properties/HealthCheckType
so.template:217:17
E3016 Value for MinInstancesInService must be of type Integer
so.template:251:11
E3016 Value for MaxBatchSize must be of type Integer
so.template:252:11
E3016 Value for WaitOnResourceSignals must be of type Boolean
so.template:254:11
E3012 Property Resources/InstanceSecurityGroup/Properties/SecurityGroupIngress/0/FromPort should be of type Integer
so.template:280:25
E3012 Property Resources/InstanceSecurityGroup/Properties/SecurityGroupIngress/0/ToPort should be of type Integer
so.template:281:25
E3012 Property Resources/InstanceSecurityGroup/Properties/SecurityGroupIngress/1/FromPort should be of type Integer
so.template:288:25
E3012 Property Resources/InstanceSecurityGroup/Properties/SecurityGroupIngress/1/ToPort should be of type Integer
so.template:289:25
```
I'd suggest you fix these issues first. | This is coming down to `HealthCheckType` being in the target group resource; it should instead be attached to your Auto Scaling group.
The fixed template for this error is below
```
{
"AWSTemplateFormatVersion": "2010-09-09",
"Parameters": {
"VpcId": {
"Type": "AWS::EC2::VPC::Id",
"Description": "VpcId of your existing Virtual Private Cloud (VPC)",
"ConstraintDescription": "must be the VPC Id of an existing Virtual Private Cloud."
},
"Subnets": {
"Type": "List<AWS::EC2::Subnet::Id>",
"Description": "The list of SubnetIds in your Virtual Private Cloud (VPC)"
},
"InstanceType": {
"Description": "WebServer EC2 instance type",
"Type": "String",
"Default": "t2.small",
"AllowedValues": [
"t1.micro",
"t2.nano",
"t2.micro",
"t2.small",
"t2.medium",
"t2.large",
"m1.small",
"m1.medium",
"cg1.4xlarge"
],
"ConstraintDescription": "must be a valid EC2 instance type."
},
"WebServerCapacity": {
"Default": "2",
"Description": "The initial number of WebServer instances",
"Type": "Number",
"MinValue": "1",
"MaxValue": "10",
"ConstraintDescription": "must be between 1 and 10 EC2 instances."
},
"KeyName": {
"Description": "The EC2 Key Pair to allow SSH access to the instances",
"Type": "AWS::EC2::KeyPair::KeyName",
"ConstraintDescription": "must be the name of an existing EC2 KeyPair."
},
"SSHLocation": {
"Description": "The IP address range that can be used to SSH to the EC2 instances",
"Type": "String",
"MinLength": "9",
"MaxLength": "18",
"Default": "0.0.0.0/0",
"AllowedPattern": "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})",
"ConstraintDescription": "must be a valid IP CIDR range of the form x.x.x.x/x."
}
},
"Mappings": {
"AWSInstanceType2Arch": {
"t1.micro": {
"Arch": "HVM64"
},
"t2.nano": {
"Arch": "HVM64"
},
"t2.micro": {
"Arch": "HVM64"
},
"t2.small": {
"Arch": "HVM64"
},
"t2.medium": {
"Arch": "HVM64"
},
"t2.large": {
"Arch": "HVM64"
},
"m1.small": {
"Arch": "HVM64"
},
"m1.medium": {
"Arch": "HVM64"
},
"m1.large": {
"Arch": "HVM64"
},
"d2.xlarge": {
"Arch": "HVM64"
}
},
"AWSInstanceType2NATArch": {
"t1.micro": {
"Arch": "NATHVM64"
},
"t2.nano": {
"Arch": "NATHVM64"
},
"t2.micro": {
"Arch": "NATHVM64"
},
"t2.small": {
"Arch": "NATHVM64"
},
"t2.medium": {
"Arch": "NATHVM64"
},
"t2.large": {
"Arch": "NATHVM64"
},
"m1.small": {
"Arch": "NATHVM64"
}
}
},
"Resources": {
"WebServerScaleUpPolicy": {
"Type": "AWS::AutoScaling::ScalingPolicy",
"Properties": {
"AdjustmentType": "ChangeInCapacity",
"AutoScalingGroupName": {
"Ref": "WebServerGroup"
},
"Cooldown": "60",
"ScalingAdjustment": "1"
}
},
"WebServerScaleDownPolicy": {
"Type": "AWS::AutoScaling::ScalingPolicy",
"Properties": {
"AdjustmentType": "ChangeInCapacity",
"AutoScalingGroupName": {
"Ref": "WebServerGroup"
},
"Cooldown": "60",
"ScalingAdjustment": "-1"
}
},
"CPUAlarmHigh": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmDescription": "Scale-up if CPU > 70% for 5 minutes",
"MetricName": "CPUUtilization",
"Namespace": "AWS/EC2",
"Statistic": "Average",
"Period": "300",
"EvaluationPeriods": "2",
"Threshold": "70",
"AlarmActions": [{
"Ref": "WebServerScaleUpPolicy"
}],
"Dimensions": [{
"Name": "AutoScalingGroupName",
"Value": {
"Ref": "WebServerGroup"
}
}],
"ComparisonOperator": "GreaterThanThreshold"
}
},
"CPUAlarmLow": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"AlarmDescription": "Scale-down if CPU < 40% for 5 minutes",
"MetricName": "CPUUtilization",
"Namespace": "AWS/EC2",
"Statistic": "Average",
"Period": "300",
"EvaluationPeriods": "2",
"Threshold": "40",
"AlarmActions": [{
"Ref": "WebServerScaleDownPolicy"
}],
"Dimensions": [{
"Name": "AutoScalingGroupName",
"Value": {
"Ref": "WebServerGroup"
}
}],
"ComparisonOperator": "LessThanThreshold"
}
},
"ApplicationLoadBalancer": {
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties": {
"Name": "elb-test",
"Scheme": "internet-facing",
"IpAddressType": "ipv4",
"Type": "application",
"Subnets": {
"Ref": "Subnets"
}
}
},
"ALBListener": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"DefaultActions": [{
"Type": "forward",
"TargetGroupArn": {
"Ref": "ALBTargetGroup"
}
}],
"LoadBalancerArn": {
"Ref": "ApplicationLoadBalancer"
},
"Port": "80",
"Protocol": "HTTP"
}
},
"ALBTargetGroup": {
"Type": "AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"Name": "ELB-Group",
"HealthCheckIntervalSeconds": 30,
"HealthCheckTimeoutSeconds": 5,
"HealthyThresholdCount": 3,
"Port": 80,
"Protocol": "HTTP",
"TargetType": "instance",
"UnhealthyThresholdCount": 5,
"VpcId": {
"Ref": "VpcId"
}
}
},
"WebServerGroup": {
"Type": "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"VPCZoneIdentifier": {
"Ref": "Subnets"
},
"HealthCheckType": "ELB",
"HealthCheckGracePeriod": 300
"LaunchConfigurationName": {
"Ref": "LaunchConfig"
},
"MinSize": "1",
"MaxSize": "8",
"DesiredCapacity": {
"Ref": "WebServerCapacity"
},
"TargetGroupARNs": [{
"Ref": "ALBTargetGroup"
}]
},
"CreationPolicy": {
"ResourceSignal": {
"Timeout": "PT5M",
"Count": {
"Ref": "WebServerCapacity"
}
}
},
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MinInstancesInService": "1",
"MaxBatchSize": "1",
"PauseTime": "PT5M",
"WaitOnResourceSignals": "true"
}
}
},
"LaunchConfig": {
"Type": "AWS::AutoScaling::LaunchConfiguration",
"Properties": {
"KeyName": {
"Ref": "KeyName"
},
"ImageId": "ami-00932e4c143f3fdf0",
"SecurityGroups": [{
"Ref": "InstanceSecurityGroup"
}],
"InstanceType": {
"Ref": "InstanceType"
},
"UserData": {
"Fn::Base64": {
"Fn::Join": [
"",
[
"#!/bin/bash -x\n",
"# Install the files and packages from the metadata\n",
"/opt/aws/bin/cfn-init -v ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource MyInstance ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n",
"# Signal the status from cfn-init\n",
"/opt/aws/bin/cfn-signal -e $? ",
" --stack ",
{
"Ref": "AWS::StackName"
},
" --resource MyInstance ",
" --region ",
{
"Ref": "AWS::Region"
},
"\n"
]
]
}
}
}
},
"InstanceSecurityGroup": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Enable SSH access and HTTP from the load balancer only",
"SecurityGroupIngress": [{
"IpProtocol": "tcp",
"FromPort": "22",
"ToPort": "22",
"CidrIp": {
"Ref": "SSHLocation"
}
},
{
"IpProtocol": "tcp",
"FromPort": "80",
"ToPort": "80",
"SourceSecurityGroupId": {
"Fn::Select": [
0,
{
"Fn::GetAtt": [
"ApplicationLoadBalancer",
"SecurityGroups"
]
}
]
}
}
],
"VpcId": {
"Ref": "VpcId"
}
}
}
},
"Outputs": {
"URL": {
"Description": "The URL of the website",
"Value": {
"Fn::Join": [
"",
[
"http://",
{
"Fn::GetAtt": [
"ApplicationLoadBalancer",
"DNSName"
]
}
]
]
}
}
}
}
``` |
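As a side note, hand-edited templates often pick up plain JSON syntax slips (a missing comma between two properties, for example), and CloudFormation's error messages for those are not always obvious. A small shell sketch that catches them early with Python's stdlib `json.tool`:

```shell
# Write a deliberately broken fragment (comma missing after "ELB"), then parse it.
cat > template-fragment.json <<'EOF'
{
  "HealthCheckType": "ELB"
  "HealthCheckGracePeriod": 300
}
EOF

if python3 -m json.tool template-fragment.json >/dev/null 2>&1; then
    echo "fragment parses"
else
    echo "fragment is invalid JSON"
fi
```

For template-specific checks beyond raw syntax, a linter such as cfn-lint is the better tool.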
62,673,787 | What I'm trying to get is the immediate upper div value (Accounts Admin) when I'm clicking on a radio button.
Below is my HTML structure. Whenever I click on the admin or alan radio button, it should give me the div inner text Accounts Admin.
How can I get that value?
```html
<div class="display-flex flex-row">
<div class="p-1 ez-sub-heading mt-1">
Accounts Admin
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input
id="admin"
type="radio"
value="admin"
name="eng-name"
class="eng-name userlist active"
/>
</div>
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input
id="alan"
type="radio"
value="alan"
name="eng-name"
class="eng-name userlist active"
/>
</div>
</div>
</div>
```
I'm trying this way but I can't get the value.
$(this).parent().find(".ez-sub-heading").text()
[![](https://i.stack.imgur.com/xvGUA.png)](https://i.stack.imgur.com/xvGUA.png) | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673787",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/2414345/"
] | Assuming you have multiple blocks of this, I would change the html structure a bit so you have a wrapper around the complete block. With the `closest()` function you can find the first wrapper parent and then use `find()` to look for the header element.
You can use `trim()` to remove the whitespace caused by code indention.
```js
$('.userlist').on('click', (e) => {
var text = $(e.currentTarget).closest('.wrapper').find('.ez-sub-heading').text();
console.log(text.trim());
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="display-flex flex-row wrapper">
<div class="p-1 ez-sub-heading mt-1">
Accounts Admin
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="admin" type="radio" value="admin" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="alan" type="radio" value="alan" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
</div>
<div class="display-flex flex-row wrapper">
<div class="p-1 ez-sub-heading mt-1">
Accounts Admin 2
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="admin" type="radio" value="admin" name="eng-name2" class="eng-name userlist active">
</div>
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="alan" type="radio" value="alan" name="eng-name2" class="eng-name userlist active">
</div>
</div>
</div>
</div>
``` | With your HTML structure as it is, you need to find the closest (nearest parent) `.row` and then use `prevAll` to get the header row.
```
$(this).closest(".row").prevAll(".flex-row").first().text().trim()
```
```js
$("input.userlist").click(function() {
console.log($(this).closest(".row").prevAll(".flex-row").first().text().trim())
});
```
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div class="display-flex flex-row">
<div class="p-1 ez-sub-heading mt-1">
Accounts Admin
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="admin" type="radio" value="admin" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="alan" type="radio" value="alan" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
<div class="display-flex flex-row">
<div class="p-1 ez-sub-heading mt-1">
Second Header
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="admin" type="radio" value="admin" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
<div class="row m-0">
<div class="col-auto">
<div>
<input id="alan" type="radio" value="alan" name="eng-name" class="eng-name userlist active">
</div>
</div>
</div>
```
A small change to the HTML structure and adding some classes would make this much simpler (not included here as already suggested in the other answer). |
62,673,802 | I am using the lightning chart and would like to make the background transparent. I tried setting the following on my chart, but it just gives me a black background:
```
.setChartBackgroundFillStyle( new SolidFill({
// A (alpha) is optional - 255 by default
color: ColorRGBA(255, 0, 0,0)
})
)
.setBackgroundFillStyle( new SolidFill({
// A (alpha) is optional - 255 by default
color: ColorRGBA(255, 0, 0,0)
})
)
.setFittingRectangleFillStyle(
new SolidFill({
// A (alpha) is optional - 255 by default
color: ColorRGBA(255, 0, 0,0)
})
)
.setChartBackgroundStrokeStyle(emptyLine)
.setBackgroundStrokeStyle(emptyLine)
.setFittingRectangleStrokeStyle(emptyLine)
```
Do let me know what I'm missing. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673802",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/13061693/"
] | In LightningChart JS version 2.0.0 and above it's possible to set the chart background color to transparent with [`chart.setChartBackgroundFillStyle(transparentFill)`](https://www.arction.com/lightningchart-js-api-documentation/v2.0.0/classes/chartxy.html#setchartbackgroundfillstyle) and [`chart.setBackgroundFillStyle(transparentFill)`](https://www.arction.com/lightningchart-js-api-documentation/v2.0.0/classes/chartxy.html#setbackgroundfillstyle)
```js
const {
lightningChart,
transparentFill,
emptyFill
} = lcjs
const chart = lightningChart().ChartXY()
.setChartBackgroundFillStyle(transparentFill)
.setBackgroundFillStyle(emptyFill)
```
```css
body {
background-color: red;
}
```
```html
<script src="https://unpkg.com/@arction/lcjs@2.0.0/dist/lcjs.iife.js"></script>
```
---
Old answer:
LightningChart JS currently doesn't support transparent background. Version 2.0 will support transparent backgrounds.
When version 2.0 is released you can use `.setBackgroundFillStyle(emptyFill)` and `.setChartBackgroundFillStyle(emptyFill)` to make a chart with transparent background. Alternatively you can use a `SolidFill` with partially transparent color. | You can use css "**background-color:transparent**" |
62,673,821 | One of our clients asks to get all the videos that they uploaded to the system. The files are stored in S3. The client expects to get one link that will download an archive with all the videos.
Is there a way to create such an archive without downloading the files, archiving them, and uploading them back to AWS?
So far I haven't found a solution.
Is it possible to do it with Glacier, or to move the files to a folder and expose it? | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673821",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/8870505/"
] | Unfortunately, you **can't create zip-like archives from existing objects** directly on S3. Similarly, you can't transfer them to Glacier to do this. Glacier is not going to produce a single zip or rar (or any type of) archive from multiple S3 objects for you.
Instead, you have to **download** them first, zip or rar them (or use whichever archiving format you prefer), and then re-upload the result to S3. Then you can share the zip/rar with your customers.
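As an illustration of that middle step, here is a minimal Java sketch that bundles several already-downloaded object bodies into a single zip archive in memory. The actual S3 calls (e.g. `getObject`/`putObject` via the AWS SDK) are deliberately left out, and the keys and bytes below are made-up placeholders:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class VideoArchiver {

    // Bundles already-downloaded object bodies (key -> bytes) into one zip
    // archive held in memory. The surrounding S3 download/upload calls are
    // omitted here on purpose.
    public static byte[] zipObjects(Map<String, byte[]> objects) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (ZipOutputStream zip = new ZipOutputStream(buffer)) {
            for (Map.Entry<String, byte[]> e : objects.entrySet()) {
                zip.putNextEntry(new ZipEntry(e.getKey()));
                zip.write(e.getValue());
                zip.closeEntry();
            }
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        Map<String, byte[]> objects = new LinkedHashMap<>();
        objects.put("videos/a.mp4", new byte[] {1, 2, 3});
        objects.put("videos/b.mp4", new byte[] {4, 5, 6});
        byte[] archive = zipObjects(objects);
        // Read the archive back to confirm both entries survived.
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(archive))) {
            for (ZipEntry entry; (entry = in.getNextEntry()) != null; ) {
                System.out.println(entry.getName());
            }
        }
    }
}
```

For large video files you would stream to disk rather than hold everything in memory, but the overall flow — download, archive, re-upload, share a link — stays the same.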
There is also [a possibility](https://stackoverflow.com/a/19327086/248823) of using the multipart AWS API to merge S3 objects without downloading them, but this requires programming a custom solution to merge objects (it does not create zip/rar-type archives). | You can create a Glacier archive for a specific prefix (what you see as a subfolder) by using AWS lifecycle rules.
More information available [here](https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html).
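If you go the lifecycle route, the configuration is a small XML document accepted by the PUT Bucket lifecycle API. The sketch below builds roughly that shape for a single rule that transitions everything under a prefix to the Glacier storage class — the `videos/` prefix and the 30-day delay are invented placeholders, and in practice you would normally set this up through the console, CLI, or SDK rather than hand-building XML:

```java
public class LifecycleRuleSketch {

    // Builds a lifecycle configuration body with one rule transitioning
    // every object under `prefix` to Glacier after `days` days. The rule
    // id and prefix are illustrative placeholders.
    public static String buildLifecycleXml(String prefix, int days) {
        return "<LifecycleConfiguration>"
             + "<Rule>"
             + "<ID>archive-" + prefix.replace("/", "") + "</ID>"
             + "<Filter><Prefix>" + prefix + "</Prefix></Filter>"
             + "<Status>Enabled</Status>"
             + "<Transition>"
             + "<Days>" + days + "</Days>"
             + "<StorageClass>GLACIER</StorageClass>"
             + "</Transition>"
             + "</Rule>"
             + "</LifecycleConfiguration>";
    }

    public static void main(String[] args) {
        // Example: archive everything under "videos/" after 30 days.
        System.out.println(buildLifecycleXml("videos/", 30));
    }
}
```

Keep in mind that, as noted above, this changes the storage class of each object individually — it does not combine them into one downloadable archive.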
**Original answer**
There is no native way to retrieve all objects as a single archive via S3.
S3 simply exposes all objects as they were uploaded; unfortunately, you will need to perform the archiving as a separate process afterwards. |
62,673,839 | I use some approaches similar to the following one in Java:
```
public static void main(String[] args) {
Scanner scan = new Scanner(System.in);
int[] a= new int[3];
//assign inputs
for (int i=0;i<3;i++)
a[i] = scan.nextInt();
scan.close();
//print inputs
for(int j=0;j<3;j++)
System.out.println(a[j]);
}
```
However, generally the first input parameter is the length, and for this reason I use an extra counter (`c`) to distinguish the first element. With this approach the scanner does not read the inputs one by one, and checking the first element and the remaining elements in two separate blocks seems redundant.
```
// input format: size of 3 and these elements (4, 5, 6)
// 3
// 4 5 6
public static void getInput() {
int n = 0; //size
int c = 0; //counter to distinguish the first index
int sum = 0; //sum of the elements
int[] array = null;
Scanner scan = new Scanner(System.in);
System.out.println("Enter size:");
//block I: read the first value (size of array) once, check it and assign it
n = scan.nextInt();
if (n <= 0)
System.out.println("n value must be greater than 0");
else
array = new int[n];
System.out.println("Enter array elements:");
//block II: check the other elements and assign them (each token is read once)
while (scan.hasNextInt() && c < n) {
int value = scan.nextInt();
if (value >= 100) {
System.out.println("Array elements must be lower than 100");
} else {
array[c] = value;
c++;
}
}
scan.close();
for (int j = 0; j < n; j++) {
sum += array[j];
}
System.out.println("Sum = " + sum);
}
```
My question is: how can I modify this approach to use a single block (a `while` loop with the `for` loop inside it)? I tried 5-6 different variations, but none of them works properly. | 2020/07/01 | [
"https://Stackoverflow.com/questions/62673839",
"https://Stackoverflow.com",
"https://Stackoverflow.com/users/-1/"
] |